Long-term memory plays a pivotal role in human cognition by enabling the analysis of contextual information. Machine learning researchers have attempted to emulate this process by developing memory-augmented neural networks (MANNs) that leverage indirectly related yet informative historical observations during learning and inference. The field of MANNs, however, is still in its infancy, and significant research effort is required before machines can achieve performance close to that of human cognition. This article presents a novel MANN framework for the advanced incorporation of historical knowledge into a predictive model. Within the key-value memory structure, we propose to decouple the key representations from the learned value memory embeddings to improve the associations between inputs and latent memory embeddings. We argue that keys should be static, sparse, and unique representations of a particular observation to provide robust input-to-memory associations, whereas value embeddings should be trainable, dense latent vectors so that they can better capture historical knowledge. Moreover, we introduce a novel memory update procedure that preserves the explainability of the historical knowledge extraction process, enabling human end-users to interpret the deep model's decisions and fostering their trust. Through extensive experiments on three datasets spanning audio, text, and image modalities, we demonstrate that the proposed innovations collectively allow this framework to outperform current state-of-the-art methods by significant margins, irrespective of modality or downstream task. The code is available at https://github.com/tha725/DE-KVMN/tree/main.
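The decoupling described above can be sketched minimally as follows. This is an illustrative assumption of the general idea, not the paper's exact design: the slot count, dimensions, the random sparse-key scheme, and all function names are hypothetical. Keys are fixed, sparse vectors used only for addressing, while the value matrix is the component that would receive gradient updates during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: N memory slots, key and value dimensionalities.
N, d_key, d_val = 8, 16, 32

# Static, sparse keys: fixed at initialization and never trained,
# so input-to-memory addressing stays stable over time.
K = rng.normal(size=(N, d_key))
K[rng.random((N, d_key)) < 0.8] = 0.0  # zero ~80% of entries for sparsity

# Trainable, dense value embeddings (would receive gradients in practice).
V = rng.normal(size=(N, d_val))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query):
    """Address memory via the static keys; return a blend of dense values."""
    scores = K @ query           # similarity between input and each key
    weights = softmax(scores)    # soft addressing over the N slots
    return weights @ V, weights  # retrieved historical knowledge + weights

q = rng.normal(size=d_key)       # a query derived from the current input
retrieved, weights = memory_read(q)
```

Because the addressing weights are a distribution over fixed, per-observation keys, they can be inspected directly to see which historical slots influenced a given prediction, which is the kind of explainability the abstract alludes to.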