The increasing size of medical image archives and the complexity of medical images have led to the development of medical content-based image retrieval (CBIR) systems. These systems retrieve images based on their visual content, in addition to conventional textual annotation, and have become a useful technique in biomedical data management. Existing CBIR systems are typically designed for single-modality images and are restricted when multi-modal images, such as co-aligned functional positron emission tomography and anatomical computed tomography (PET/CT) images, are considered. Furthermore, the inherent spatial relationships among adjacent structures in biomedical images are not fully exploited. In this study, we present a novel retrieval system for dual-modality PET/CT images that uses graph-based methods to spatially represent the structural relationships within these images. We exploit the co-aligned functional and anatomical information in PET/CT, using attributed relational graphs (ARGs) to represent both modalities spatially and applying graph matching for similarity measurement. Quantitative evaluation demonstrated that our dual-modal ARG enabled CBIR of dual-modality PET/CT images. The potential of our dual-modal ARG in clinical applications was also explored.
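To make the ARG idea concrete, the sketch below is a minimal, illustrative Python example, not the paper's actual method: it builds a toy ARG whose nodes carry per-region functional (PET) and anatomical (CT) attributes and whose edges encode a spatial relation (centroid distance), then scores similarity with NetworkX's inexact graph edit distance as a stand-in for the graph matching used in the study. The region schema (`centroid`, `volume`, `mean_suv`, `mean_hu`) and all cost weights are assumptions for illustration only.

```python
import networkx as nx
import numpy as np

def build_arg(regions):
    """Build a toy attributed relational graph (ARG) from segmented regions.

    `regions` is a list of dicts with an assumed schema:
    centroid (xyz tuple), volume, mean_suv (PET uptake), mean_hu (CT density).
    """
    g = nx.Graph()
    for i, r in enumerate(regions):
        # Node attributes combine functional (SUV) and anatomical (HU) features.
        g.add_node(i, volume=r["volume"], mean_suv=r["mean_suv"], mean_hu=r["mean_hu"])
    # Fully connect regions; the edge attribute is a simple spatial relation.
    for i, ri in enumerate(regions):
        for j in range(i + 1, len(regions)):
            d = float(np.linalg.norm(
                np.asarray(ri["centroid"]) - np.asarray(regions[j]["centroid"])))
            g.add_edge(i, j, distance=d)
    return g

def node_cost(a, b):
    # Illustrative substitution cost between two region attribute dicts;
    # the weighting of SUV vs. HU differences is an arbitrary assumption.
    return abs(a["mean_suv"] - b["mean_suv"]) + 1e-3 * abs(a["mean_hu"] - b["mean_hu"])

def edge_cost(a, b):
    # Penalize differences in the spatial relation between region pairs.
    return abs(a["distance"] - b["distance"])

def arg_distance(g1, g2):
    """Inexact graph matching via graph edit distance; smaller = more similar."""
    return nx.graph_edit_distance(
        g1, g2, node_subst_cost=node_cost, edge_subst_cost=edge_cost, timeout=5.0)
```

In a retrieval setting, a query image's ARG would be compared against the ARGs of all archived images and the results ranked by ascending `arg_distance`; the actual similarity measure and attribute set in the study may differ.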