Open Access Thesis
Although deep learning systems trained on medical images have shown state-of-the-art performance in many clinical prediction tasks, recent studies demonstrate that these systems can be fooled by carefully crafted adversarial images. This has raised concerns about the practical deployment of deep learning based medical image classification systems. Although an array of defense techniques has been developed and proven effective in computer vision, defending against adversarial attacks on medical images remains largely uncharted territory due to two unique challenges: crafted adversarial noise added to a highly standardized medical image can turn it into a hard sample for the model to predict, and label scarcity limits adversarial generalizability. To tackle these challenges, we propose two defense methods: an unsupervised learning approach to detect these crafted hard samples, and a robust medical imaging AI framework built on an additional Semi-Supervised Adversarial Training (SSAT) module to enhance overall system robustness, along with a new measure for assessing a system's adversarial risk. We systematically demonstrate the advantages of our methods over existing adversarial defense techniques under diverse real-world adversarial attack settings, using benchmark X-ray and OCT imaging data sets.
Li, Xin, "Defending Against Adversarial Attacks on Medical Imaging AI Systems" (2022). Wayne State University Theses. 872.