Access Type

Open Access Dissertation

Date of Award

January 2018

Degree Type

Dissertation

Degree Name

Ph.D.

Department

Computer Science

First Advisor

Jing Hua

Abstract

3D face reconstruction and facial expression analytics using 3D facial data are active research topics in computer graphics and computer vision. In this dissertation, we first review the background for emotion analytics using 3D morphable face models, including geometry feature-based methods, statistical model-based methods, and more advanced deep learning-based methods. We then introduce a novel 3D face modeling and reconstruction solution that robustly and accurately acquires 3D face models from a few images captured by a single smartphone camera. Two selfie photos of a subject, taken from the front and the side, guide our Non-Negative Matrix Factorization (NMF) induced part-based face model to iteratively reconstruct an initial 3D face of the subject.
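As a rough illustration of this stage, the sketch below fits a part-based NMF basis, learned from a stand-in face database, to an observed face vector via non-negative least squares. All names and data here (the basis W, the coefficient solve) are hypothetical placeholders under a simplified formulation, not the dissertation's actual implementation.

```python
# Hedged sketch: fitting an NMF part-based face basis to observed data.
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

# Training: each row of X is a flattened 3D face (vertex coordinates),
# shifted to be non-negative as NMF requires. Random data stands in for
# the 3D face database mentioned above.
X = np.random.rand(200, 3000)
model = NMF(n_components=30, init="nndsvda", max_iter=500)
H = model.fit_transform(X)        # per-face coefficients
W = model.components_             # part-based basis (localized parts)

# Reconstruction: given a target face vector y (e.g., assembled from
# front/side constraints), solve min ||W^T h - y|| subject to h >= 0.
y = X[0]                          # stand-in for the acquired face data
h, _ = nnls(W.T, y)               # non-negative part coefficients
initial_face = W.T @ h            # initial 3D face estimate
```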

An iterative detail updating method is then applied to the initial 3D face to recover facial details by optimizing lighting parameters and local depths. Our iterative 3D face reconstruction method permits fully automatic registration of the part-based face representation to the acquired face data and the detailed 2D/3D features, yielding a high-quality 3D face model. The NMF part-based face representation learned from a 3D face database enables alternating between effective global fitting and adaptive local detail fitting. Our system is flexible and allows users to conduct the capture in any uncontrolled environment; we demonstrate its capability by letting users capture and reconstruct their 3D faces by themselves.
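Under a simple Lambertian shading assumption, one round of this alternating refinement might look like the following; the shading model, the depth perturbation, and all data are toy stand-ins, not the actual objective used in this work.

```python
# Hedged sketch: alternate between fitting lighting and nudging local
# depths so rendered shading matches the observed image intensities.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
normals = rng.normal(size=(400, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
observed = rng.uniform(0.2, 1.0, size=400)   # stand-in image intensities

def shade(nrm, light):
    # Lambertian shading: I = max(n . l, 0)
    return np.clip(nrm @ light, 0.0, None)

# (a) Fix geometry, fit the lighting parameters to the observations.
light = least_squares(lambda l: shade(normals, l) - observed,
                      x0=np.array([0.0, 0.0, 1.0])).x

# (b) Fix lighting, pick a per-vertex depth offset from a small grid
# (a crude proxy for the local depth optimization described above).
candidates = np.linspace(-0.05, 0.05, 11)
trials = np.stack([shade(normals * (1.0 + d), light) for d in candidates])
best_offset = candidates[np.argmin(np.abs(trials - observed), axis=0)]
```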

Based on the reconstructed 3D face model, we can analyze facial expressions and the associated emotions in 3D space. We present a novel approach to analyzing facial expressions from images, together with a quantitative information visualization scheme for exploring this type of visual data. From the result reconstructed with the NMF part-based morphable 3D face model, basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization. From these features, two Support Vector Regressions (SVRs) are trained to estimate fuzzy Valence-Arousal (VA) values that quantify the emotions.
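A minimal sketch of this two-regressor design, assuming scikit-learn's SVR and synthetic features in place of the NMF basis parameters and displacement-map descriptors:

```python
# Hedged sketch: one SVR per Valence-Arousal axis on synthetic features.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))            # stand-in expression features
valence = np.tanh(X[:, 0] + 0.1 * rng.normal(size=300))
arousal = np.tanh(X[:, 1] + 0.1 * rng.normal(size=300))

svr_v = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
svr_a = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
svr_v.fit(X, valence)
svr_a.fit(X, arousal)

frame = rng.normal(size=(1, 40))          # features from a new frame
va = (svr_v.predict(frame)[0], svr_a.predict(frame)[0])
print(f"predicted VA: {va}")
```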

The continuously changing emotional state can then be analyzed intuitively by visualizing the VA values in VA-space. Our emotion analysis and visualization system, based on the 3D NMF morphable face model, detects expressions robustly across various head poses, face sizes, and lighting conditions, and computes VA values fully automatically from images or video sequences containing varied facial expressions. To evaluate the method, we test our system on publicly available databases and assess the emotion analysis and visualization results. We also apply it to quantifying emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method.
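The VA-space view itself can be as simple as plotting the per-frame (valence, arousal) predictions as a trajectory in the unit square; this matplotlib sketch with synthetic predictions is purely illustrative.

```python
# Hedged sketch: a VA-space trajectory (valence on x, arousal on y).
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 120)
valence = 0.6 * np.cos(t)                 # stand-in per-frame predictions
arousal = 0.6 * np.sin(t)

fig, ax = plt.subplots(figsize=(4, 4))
ax.plot(valence, arousal, "-o", markersize=2)
ax.axhline(0, color="gray", lw=0.5)
ax.axvline(0, color="gray", lw=0.5)
ax.set(xlim=(-1, 1), ylim=(-1, 1), xlabel="Valence", ylabel="Arousal",
       title="Emotion trajectory in VA-space")
plt.show()
```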

To further improve expression recognition accuracy, we present a facial expression recognition approach based on a 3D Mesh Convolutional Neural Network (3DMCNN), together with a visual analytics guided 3DMCNN design and optimization scheme. Geometric properties of the surface are computed from the 3D face model of a subject exhibiting facial expressions. Instead of using a regular Convolutional Neural Network (CNN) to learn intensities from facial images, we convolve the geometric properties over the surface of the 3D model with the 3DMCNN. We design a geodesic distance-based convolution method to overcome the difficulties arising from the irregular sampling of the face surface mesh.
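To make the geodesic convolution concrete, the sketch below gathers, for each vertex, neighbors within a geodesic radius (approximated by Dijkstra over edge lengths) and applies a shared kernel binned by distance; the binning scheme is an assumed design, not the exact kernel used in this work.

```python
# Hedged sketch: a geodesic distance-based convolution on a triangle mesh.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_conv(vertices, edges, features, weights, radius):
    """Convolve per-vertex scalar features with a distance-binned kernel."""
    n = len(vertices)
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]],
                             axis=1)
    graph = csr_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
    dist = dijkstra(graph, directed=False, limit=radius)  # approx geodesics
    n_bins = len(weights)
    out = np.zeros(n)
    for v in range(n):
        nbrs = np.where(np.isfinite(dist[v]))[0]
        bins = np.minimum((dist[v, nbrs] / radius * n_bins).astype(int),
                          n_bins - 1)
        for b in range(n_bins):           # weighted mean per distance bin
            sel = nbrs[bins == b]
            if sel.size:
                out[v] += weights[b] * features[sel].mean()
    return out

# Toy usage: a tetrahedron with one scalar feature per vertex.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
E = np.array([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
f = np.array([1.0, 2.0, 3.0, 4.0])
print(geodesic_conv(V, E, f, weights=np.array([0.5, 0.25]), radius=2.0))
```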

We further present an interactive visual analytics approach for designing and modifying the network: it analyzes the learned features and clusters similar nodes in the 3DMCNN. By removing low-activity nodes from the network, its performance is greatly improved. We compare our method with a regular CNN-based method by interactively visualizing each layer of the networks, and we analyze its effectiveness by studying representative cases. On public datasets, our method achieves higher recognition accuracy than a traditional image-based CNN and other 3D CNNs. The presented framework, including the 3DMCNN and the interactive visual analytics of the CNN, can be extended to other applications.
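One plausible reading of the pruning mechanics, with plain NumPy standing in for the actual network and tooling: measure mean absolute activation per node over a probe set and drop the quietest ones.

```python
# Hedged sketch: pruning low-activity hidden nodes in a toy dense layer.
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(64, 128))           # layer weights: 64 in -> 128 nodes
W2 = rng.normal(size=(128, 7))            # next layer: 128 -> 7 expressions
probe = rng.normal(size=(500, 64))        # stand-in validation inputs

hidden = np.maximum(probe @ W1, 0.0)      # ReLU activations on the probe set
activity = np.abs(hidden).mean(axis=0)    # mean activity per node

keep = activity > np.quantile(activity, 0.25)   # drop the quietest 25%
W1_pruned, W2_pruned = W1[:, keep], W2[keep, :]
print(f"kept {keep.sum()} of {keep.size} nodes")
```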
