Cancer is a pressing public health problem and one of the leading causes of mortality worldwide. Developing advanced computational methods for cancer survival prediction is pivotal in helping clinicians formulate effective treatment strategies and improve patients' quality of life. Recent advances in survival prediction show that integrating complementary information from diverse cancer-related data, such as pathological images and genomics, is crucial for improving prediction accuracy. Despite the promising results of existing approaches, multimodal cancer data pose two major challenges: the modality gap and semantic redundancy, which hinder comprehensive integration and limit further gains in survival prediction. In this study, we propose a novel agnostic-specific modality learning (ASML) framework for accurate cancer survival prediction. To bridge the modality gap and provide a comprehensive view of the distinct data modalities, we employ an agnostic-specific learning strategy that captures both the commonality shared across modalities and the uniqueness of each modality. Moreover, a cross-modal fusion network integrates multimodal information by modeling inter-modality correlations and reduces semantic redundancy in a divide-and-conquer manner. Extensive experimental results on three TCGA datasets demonstrate that ASML outperforms existing multimodal cancer survival prediction methods.
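To make the high-level design concrete, below is a minimal, hypothetical PyTorch sketch of the kind of architecture the abstract describes: per-modality agnostic (shared) and specific (unique) encoders, an attention-based cross-modal fusion step, and a survival-risk head. All module names, feature dimensions, and the choice of multi-head attention for fusion are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class AgnosticSpecificEncoder(nn.Module):
    """Projects one modality into a modality-agnostic (shared) space and a
    modality-specific (unique) space -- one plausible reading of the
    agnostic-specific learning strategy described in the abstract."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.agnostic = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.specific = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())

    def forward(self, x):
        return self.agnostic(x), self.specific(x)


class ASMLSketch(nn.Module):
    """Minimal two-modality sketch: agnostic/specific encoding per modality,
    cross-modal attention fusion, and a scalar survival-risk prediction."""

    def __init__(self, path_dim: int = 1024, gene_dim: int = 200, hidden_dim: int = 256):
        super().__init__()
        self.path_enc = AgnosticSpecificEncoder(path_dim, hidden_dim)
        self.gene_enc = AgnosticSpecificEncoder(gene_dim, hidden_dim)
        # Cross-modal fusion over the four branch embeddings
        # (agnostic + specific for each of the two modalities).
        self.fusion = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.risk_head = nn.Linear(hidden_dim, 1)  # one risk score per patient

    def forward(self, path_feat, gene_feat):
        pa, ps = self.path_enc(path_feat)   # pathology: agnostic, specific
        ga, gs = self.gene_enc(gene_feat)   # genomics:  agnostic, specific
        tokens = torch.stack([pa, ps, ga, gs], dim=1)   # (B, 4, hidden_dim)
        fused, _ = self.fusion(tokens, tokens, tokens)  # model cross-modal correlations
        pooled = fused.mean(dim=1)                      # aggregate fused tokens
        return self.risk_head(pooled).squeeze(-1)


if __name__ == "__main__":
    model = ASMLSketch()
    path_feat = torch.randn(8, 1024)  # e.g. pooled whole-slide-image features
    gene_feat = torch.randn(8, 200)   # e.g. gene-expression profile
    risk = model(path_feat, gene_feat)
    print(risk.shape)                 # torch.Size([8])
```

In a typical survival-prediction setup, a risk score like this would be trained with a Cox partial-likelihood or discrete-time survival loss against TCGA follow-up labels; the abstract does not specify the training objective, so that choice is left open here.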