Since the emergence of the COVID-19 pandemic, various methods to detect the illness from cough and speech audio data have been proposed. While many of them deliver promising results, they lack transparency in the form of explanations, which is crucial for establishing trust in the classifiers. We propose CoughLIME, which extends LIME to explanations for audio data, tailored specifically towards cough data. We show that CoughLIME is capable of generating faithful sonified explanations for COVID-19 detection. To quantify the performance of the explanations generated for the CIdeR model, we adapt pixel flipping to the audio domain and introduce a novel metric to assess the faithfulness of the explanations. CoughLIME achieves a ΔAUC of 19.48 % when generating explanations for CIdeR's predictions.
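To make the evaluation idea concrete, the following is a minimal, hypothetical Python sketch (not the authors' implementation) of how pixel flipping can be adapted to audio: waveform components are zeroed out in order of decreasing importance and, for comparison, in random order, and the average gap between the two resulting confidence curves serves as a ΔAUC-style score. The equal-length temporal splitting, the placeholder `predict_proba` classifier, and the toy importances are illustrative assumptions only.

```python
# Minimal sketch of an audio "pixel flipping" faithfulness check (illustrative only).
import numpy as np


def split_into_components(audio: np.ndarray, n_components: int) -> list:
    """Split a 1-D waveform into equally sized temporal components."""
    return np.array_split(audio, n_components)


def flip_components(components, order, n_flipped):
    """Zero out the first `n_flipped` components in `order`, then re-assemble the waveform."""
    perturbed = [c.copy() for c in components]
    for idx in order[:n_flipped]:
        perturbed[idx] = np.zeros_like(perturbed[idx])
    return np.concatenate(perturbed)


def flipping_curve(audio, n_components, predict_proba, order):
    """Classifier confidence as components are progressively removed in the given order."""
    components = split_into_components(audio, n_components)
    return np.array([
        predict_proba(flip_components(components, order, k))
        for k in range(n_components + 1)
    ])


def delta_auc(audio, importances, predict_proba, rng=None):
    """Average gap between the random-removal and importance-ordered-removal curves.

    A larger value indicates that the explanation highlights components that matter
    more to the classifier than randomly chosen ones.
    """
    rng = rng or np.random.default_rng(0)
    informed_order = np.argsort(importances)[::-1]      # most important components first
    random_order = rng.permutation(len(importances))
    informed = flipping_curve(audio, len(importances), predict_proba, informed_order)
    random_curve = flipping_curve(audio, len(importances), predict_proba, random_order)
    # Mean difference between the curves (a normalised area-difference score).
    return float(np.mean(random_curve - informed))


if __name__ == "__main__":
    # Toy example: a stand-in "classifier" whose output depends on signal energy.
    rng = np.random.default_rng(42)
    audio = rng.normal(scale=0.1, size=16_000)
    audio[4_000:6_000] += np.sin(np.linspace(0, 200 * np.pi, 2_000))  # salient region

    def predict_proba(x: np.ndarray) -> float:
        return float(1.0 - np.exp(-np.mean(x ** 2)))

    # Toy importances: here, simply the energy of each component.
    comps = split_into_components(audio, 8)
    importances = np.array([np.mean(c ** 2) for c in comps])

    print(f"Delta AUC: {delta_auc(audio, importances, predict_proba):.4f}")
```

In this sketch, the importance ranking would come from the explainer (e.g., CoughLIME's component weights) rather than from the toy energy heuristic used above.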