
Access Type

WSU Access

Date of Award

January 2018

Degree Type

Dissertation

Degree Name

Ph.D.

Department

Computer Science

First Advisor

Xuewen Chen

Abstract

Convolutional neural networks (CNNs) attain state-of-the-art performance on various classification tasks, assuming a sufficiently large number of labeled training examples. Unfortunately, curating a sufficiently large labeled training dataset requires human involvement, which is expensive, time-consuming, and susceptible to noisy labels. Semi-supervised learning methods can alleviate these problems in one of two ways: first, by utilizing a limited number of labeled examples in conjunction with a sufficiently large set of unlabeled examples to construct a classification model; second, by exploiting a sufficiently large training set with noisy labels to learn a classification model. In this dissertation, we propose new methods to mitigate these problems. We summarize our main contributions in three facets, described below.

First, we present a new Hybrid Residual Network method (HyResNet) that combines the strengths of supervised and unsupervised deep learning in a single supervised deep learning model. Our experiments show the efficacy of HyResNet on visual object recognition tasks. We tested HyResNet on benchmark datasets under various configurations and settings, where it achieved results comparable to state-of-the-art methods.

Second, we propose a deep semi-supervised learning method (DSSL). DSSL utilizes both supervised and unsupervised neural networks. Its novelty lies in employing a limited number of labeled training examples in conjunction with a sufficiently large set of unlabeled examples to create a classification model. The combination of the DSSL architecture and self-training jointly improves the model's performance. We measured the performance of DSSL on five benchmark datasets with various ratios of labeled to unlabeled training examples and compared our results with state-of-the-art methods. The experiments show that DSSL sets a new state-of-the-art record on various benchmark tasks.
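The self-training idea referenced above — training on a few labeled examples, then iteratively pseudo-labeling the unlabeled pool — can be sketched generically as follows. This is an illustrative pseudo-labeling loop with a toy nearest-centroid classifier, not the DSSL architecture itself; all function names, the confidence measure, and the 0.9 threshold are assumptions for the sketch.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit a toy classifier: one centroid (mean) per class label."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_with_confidence(X, classes, centroids):
    """Predict the nearest class and a softmax confidence over negative distances."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    logits = -d
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return classes[p.argmax(axis=1)], p.max(axis=1)

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.9):
    """Iteratively move confidently pseudo-labeled points into the training set."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        classes, centroids = nearest_centroid_fit(X, y)
        pred, conf = predict_with_confidence(pool, classes, centroids)
        keep = conf >= threshold
        if not keep.any():
            break
        # Accept high-confidence pseudo-labels; shrink the unlabeled pool.
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, pred[keep]])
        pool = pool[~keep]
    return nearest_centroid_fit(X, y)
```

With two well-separated clusters and only one labeled example per class, the loop absorbs the unlabeled points in the first round or two, and the resulting centroids classify held-out points correctly.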

Finally, we introduce a new teacher/student semi-supervised deep learning method (TS-DSSL). TS-DSSL takes a training dataset with noisy labels as input and employs self-training and self-cleansing techniques to train a deep learning model. The integration of the TS-DSSL architecture with the training protocol maintains the stability of the model and enhances its overall performance. We evaluated the performance of TS-DSSL on benchmark semi-supervised learning tasks with different levels of noisy labels synthesized from different noise distributions. The experiments showed that TS-DSSL sets a new state-of-the-art record on the benchmark tasks.
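The self-cleansing idea in a teacher/student setup can be illustrated generically: a teacher model flags training examples whose given (possibly noisy) label disagrees with its own prediction, and the student retrains on the remaining examples. The sketch below uses a toy nearest-centroid model; the function names, the agreement-based filter, and the round count are assumptions for illustration, not the TS-DSSL protocol.

```python
import numpy as np

def centroid_fit(X, y):
    """Fit a toy classifier: one centroid (mean) per class label."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def centroid_predict(X, classes, centroids):
    """Assign each point to its nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def cleanse_and_train(X, y_noisy, rounds=3):
    """Teacher/student style label cleansing: the current model (teacher)
    flags examples whose given label disagrees with its prediction,
    and the student retrains on the examples that agree."""
    keep = np.ones(len(y_noisy), dtype=bool)
    for _ in range(rounds):
        classes, centroids = centroid_fit(X[keep], y_noisy[keep])
        pred = centroid_predict(X, classes, centroids)
        new_keep = pred == y_noisy
        # Stop if filtering would drop an entire class.
        if len(np.unique(y_noisy[new_keep])) < len(np.unique(y_noisy)):
            break
        keep = new_keep
    return centroid_fit(X[keep], y_noisy[keep])
```

On well-separated clusters with a few flipped labels, the mislabeled points disagree with the teacher's prediction and are filtered out, so the final centroids are fit on clean data.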
