
Access Type

WSU Access

Date of Award

January 2022

Degree Type

Dissertation

Degree Name

Ph.D.

Department

Computer Science

First Advisor

Dongxiao Zhu

Abstract

Deep neural networks (DNNs) have attracted much attention in the machine learning community due to their state-of-the-art performance on various tasks. Despite these successes, interpreting a complex DNN remains an open problem, hindering wide deployment in safety- and security-critical domains. Hence, understanding the interpretability and explainability of DNNs has become a critical problem in machine learning.

In this dissertation, we first survey fundamental works and recent advances in interpretable machine learning in Chapter 1. In Chapter 2, we tackle the interpretability problem in recommender systems by designing and implementing a feature mapping strategy within the recommender system, called Attentive Multitask Collaborative Filtering (AMCF). We also evaluate the performance of AMCF in terms of model-level interpretability and user-level explainability. In Chapter 3, we propose a gradient-based DNN interpretation strategy called Adversarial Gradient Integration (AGI) that utilizes back-propagation and adversarial effects. By decomposing a classification problem into multiple discrimination problems, we eliminate the inconsistency issues of competing methods introduced by arbitrary baselines and path selections. In Chapter 4, we propose a novel direction for DNN interpretation called gradient accumulation methods, which view the model's output as the accumulation of gradients over the input space. We also implement Negative Flux Aggregation (NeFLAG), which calculates the gradient accumulation by exploiting the concepts of divergence and flux from vector analysis.
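To make the gradient-integration idea concrete, the following is a minimal, hypothetical PyTorch sketch of straight-line path integration of gradients, the scheme whose arbitrary baseline and path choices AGI is designed to eliminate (AGI instead follows paths induced by adversarial attacks). The function name and the fixed straight-line path are illustrative assumptions, not the dissertation's algorithm.

import torch

def path_integrated_attribution(model, x, baseline, target, steps=50):
    # Riemann-sum approximation of the path integral of the gradient of the
    # target-class logit, taken along a straight line from `baseline` to `x`.
    # `x` and `baseline` are assumed to carry a batch dimension of size 1.
    total_grad = torch.zeros_like(x)
    for i in range(1, steps + 1):
        point = (baseline + (i / steps) * (x - baseline)).detach().requires_grad_(True)
        logit = model(point)[0, target]  # scalar score for the target class
        total_grad += torch.autograd.grad(logit, point)[0]
    # The elementwise product recovers per-feature attributions
    return (x - baseline) * total_grad / steps

Note that the result depends on the choice of `baseline` and of the straight-line path; this dependence is exactly the inconsistency that the adversarial paths of AGI, and the divergence/flux formulation of NeFLAG, are proposed to remove.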

In Chapter 5, we conclude the dissertation by summarizing its original contributions to the area of interpretable machine learning and pointing out promising future directions.
