Off-the-shelf unsupervised domain adaptation (OSUDA) has been introduced to protect patient data privacy and the intellectual property of the source domain by adapting without access to the labeled source-domain data. Yet an off-the-shelf diagnosis model, deliberately compromised by backdoor attacks during source-domain training, can act as a parasitic host, disseminating the backdoor to the target-domain model during the OSUDA stage. Because the source-domain training data can be neither accessed nor controlled, OSUDA can leave the target-domain model highly susceptible to such attacks. To address this threat, we propose to quantify channel-wise backdoor sensitivity via a Lipschitz constant and to explicitly eliminate the backdoor infection by overwriting the backdoor-related channel kernels with random initialization. Furthermore, we employ an auxiliary model alongside the full source model to ensure accurate pseudo-labeling, leveraging the clean, controllable target-domain training data available in OSUDA. We validate our framework on a multi-center, multi-vendor, and multi-disease (M&M) cardiac dataset. Our findings show that the target model is susceptible to backdoor attacks during OSUDA, and that our defense mechanism effectively mitigates the infection of target-domain victims.
Keywords: Backdoor attacks; medical AI security; off-the-shelf unsupervised domain adaptation; cardiac MRI.
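The channel-wise screening and sanitization described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function names are hypothetical, the per-channel Lipschitz constant is upper-bounded by the spectral norm of each output channel's flattened kernel slice, and Kaiming-normal re-initialization stands in for whatever random initialization the method actually uses.

```python
import torch
import torch.nn as nn


def channel_lipschitz(conv: nn.Conv2d) -> torch.Tensor:
    """Upper-bound each output channel's Lipschitz constant by the spectral
    norm of its kernel slice, viewed as a (C_in, kH*kW) matrix.
    (Illustrative bound; the paper's exact estimator may differ.)"""
    w = conv.weight.detach()                      # (C_out, C_in, kH, kW)
    mats = w.view(w.size(0), w.size(1), -1)       # one matrix per output channel
    return torch.linalg.matrix_norm(mats, ord=2)  # batched spectral norms, (C_out,)


def reinit_sensitive_channels(conv: nn.Conv2d, k: int) -> torch.Tensor:
    """Overwrite the k most backdoor-sensitive output channels (highest
    Lipschitz bound) with fresh random kernels; return their indices."""
    idx = torch.topk(channel_lipschitz(conv), k).indices
    with torch.no_grad():
        fresh = torch.empty_like(conv.weight[idx])
        nn.init.kaiming_normal_(fresh)            # assumed re-initialization scheme
        conv.weight[idx] = fresh
        if conv.bias is not None:
            conv.bias[idx] = 0.0
    return idx
```

A channel whose kernel has an inflated spectral norm reacts disproportionately to small trigger patterns, which is why a Lipschitz bound serves as a sensitivity proxy; re-initializing only the flagged channels removes that response while leaving the remaining kernels, and thus most of the model's diagnostic capacity, intact.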