Date Approved

7-26-2019

Embargo Period

8-6-2019

Document Type

Thesis

Degree Name

M.S. Electrical Engineering

Department

Electrical and Computer Engineering

College

Henry M. Rowan College of Engineering

Advisor

Ramachandran, Ravi

Committee Member 1

Head, Linda

Committee Member 2

Thayasivam, Uma

Keywords

computer vision, digital signal processing, emotion recognition, facial feature extraction, feature fusion

Subject(s)

Emotion recognition; Face perception; Pattern recognition systems

Disciplines

Electrical and Computer Engineering

Abstract

Computerized emotion recognition systems can be powerful tools to help solve problems in a wide range of fields, including education, healthcare, and marketing. Existing systems use digital images or live video to track facial expressions on a person's face and deduce that person's emotional state. The research presented in this thesis explores combinations of several facial feature extraction techniques with different classifier algorithms. Namely, the feature extraction techniques used in this research were the Discrete Cosine/Sine Transforms, the Fast Walsh-Hadamard Transform, Principal Component Analysis, and a novel method called XPoint. Features were extracted from both global (using the entire facial image) and local (using only facial regions such as the mouth or eyes) contexts and classified with Linear Discriminant Analysis and k-Nearest Neighbor algorithms. Some experiments also fused several of these features into a single system in an effort to further improve accuracy.
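
A minimal sketch of the kind of pipeline described above (transform-based features fed to Linear Discriminant Analysis and k-Nearest Neighbor classifiers) might look like the following; the image size, number of retained coefficients, and placeholder data are assumptions for illustration, not the thesis implementation.

```python
# Rough sketch of a DCT-feature + LDA/kNN emotion classifier (illustrative only:
# face crops, labels, crop size, and retained-coefficient count are assumptions).
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def dct_features(face_img, keep=16):
    """2-D DCT of a grayscale face crop; keep the low-frequency keep x keep block."""
    coeffs = dctn(face_img, norm="ortho")
    return coeffs[:keep, :keep].ravel()

# Placeholder data standing in for preprocessed face crops and emotion labels.
rng = np.random.default_rng(0)
X_faces = rng.random((120, 64, 64))   # 120 hypothetical 64x64 grayscale faces
y = rng.integers(0, 6, size=120)      # 6 hypothetical emotion classes

X = np.array([dct_features(img) for img in X_faces])

for clf in (LinearDiscriminantAnalysis(), KNeighborsClassifier(n_neighbors=5)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{type(clf).__name__}: mean CV accuracy = {acc:.2f}")
```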

The system accuracy for each feature extraction method/classifier combination was calculated and discussed. The best-performing combinations produced systems between 85% and 90% accurate. The most accurate systems used Discrete Sine Transform features extracted from both global and local contexts with a Linear Discriminant Analysis classifier, as well as fusion of all features in a Linear Discriminant Analysis classifier.
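
As a rough illustration of the feature-fusion idea, the sketch below concatenates global and local (eye and mouth) Discrete Sine Transform features into one vector per face before classification with Linear Discriminant Analysis; the region coordinates and data are assumptions, not values from the thesis.

```python
# Illustrative feature-fusion step (layout and region coordinates are assumed):
# global DST features are concatenated with DST features from local eye/mouth
# regions, and the fused vectors feed an LDA classifier.
import numpy as np
from scipy.fft import dstn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dst_features(patch, keep=8):
    """2-D DST of an image patch; keep the lowest keep x keep coefficients."""
    return dstn(patch, norm="ortho")[:keep, :keep].ravel()

def fused_features(face_img):
    """Concatenate global features with features from hypothetical eye/mouth crops."""
    eyes = face_img[8:24, :]          # hypothetical eye band
    mouth = face_img[40:60, 12:52]    # hypothetical mouth region
    return np.concatenate([dst_features(face_img),
                           dst_features(eyes),
                           dst_features(mouth)])

# Placeholder face crops and labels, as in the previous sketch.
rng = np.random.default_rng(1)
X_faces = rng.random((120, 64, 64))
y = rng.integers(0, 6, size=120)

X_fused = np.array([fused_features(img) for img in X_faces])
clf = LinearDiscriminantAnalysis().fit(X_fused, y)
print("fused feature length:", X_fused.shape[1])
print("training accuracy:", clf.score(X_fused, y))
```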
