Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by differences in social communication and by repetitive behaviors, and it is often associated with atypical visual attention patterns. This paper proposes the Gaze-Based Autism Classifier (GBAC), a deep neural network that leverages both data distillation and data attribution to improve the accuracy and explainability of ASD classification. Using data sampled by eye-tracking sensors, the model identifies gaze behaviors linked to ASD and applies TracIn, an explainability technique for data attribution, computing self-influence scores to filter out noisy or anomalous training samples. This refinement improves both accuracy and computational efficiency: the model achieves a test accuracy of 94.35% while using only 77% of the dataset, outperforming the same architecture trained on the full dataset, models trained on random sample reductions, and existing benchmarks. The data attribution analysis also identifies the most influential training examples, offering a deeper understanding of how gaze patterns correlate with ASD-specific characteristics. These results underscore the potential of integrating explainable artificial intelligence into neurodevelopmental disorder diagnostics, advancing clinical research on the visual attention patterns associated with ASD.
Keywords: TracIn method; autism spectrum disorder; deep neural networks; explainability; eye tracking sensors; gaze analysis.
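As a rough illustration of the filtering idea described above (not the paper's implementation), TracIn's self-influence score for a training example sums, over saved model checkpoints, the learning-rate-scaled squared norm of the loss gradient at that example; the highest-scoring examples are candidates for removal as noisy or anomalous. The sketch below uses a logistic-regression stand-in with synthetic data in place of gaze features; all names, data, and hyperparameters are hypothetical.

```python
import numpy as np

def self_influence(X, y, checkpoints, lrs):
    """TracIn self-influence for a logistic-regression stand-in:
    sum over checkpoints t of lr_t * ||grad_w loss(x_i, y_i; w_t)||^2."""
    scores = np.zeros(len(X))
    for w, lr in zip(checkpoints, lrs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        grads = (p - y)[:, None] * X            # per-example log-loss gradient
        scores += lr * np.sum(grads ** 2, axis=1)
    return scores

def filter_by_self_influence(X, y, scores, keep_frac=0.77):
    """Keep the keep_frac lowest-scoring examples; unusually high
    self-influence often flags noisy or atypical samples."""
    k = int(len(X) * keep_frac)
    keep = np.argsort(scores)[:k]
    return X[keep], y[keep]

# Toy demo: synthetic features standing in for extracted gaze features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(float)
checkpoints = [rng.normal(scale=0.1, size=5) for _ in range(3)]
scores = self_influence(X, y, checkpoints, lrs=[0.1, 0.05, 0.01])
X_f, y_f = filter_by_self_influence(X, y, scores)
print(len(X_f))  # 77 of 100 examples retained
```

In practice the checkpoints would be snapshots of the trained network's weights, and the gradients would be taken with respect to the network's parameters rather than a single weight vector; the filtering step is the same.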