Date Approved

8-17-2017

Embargo Period

8-22-2017

Document Type

Thesis

Degree Name

MS Electrical & Computer Engineering

Department

Electrical & Computer Engineering

College

Henry M. Rowan College of Engineering

Advisor

Polikar, Robi

Committee Member 1

Thayasivam, Umashangar

Committee Member 2

Ramachandran, Ravi P.

Keywords

computational algorithms, concept drift, learning in initially labeled nonstationary environments

Subject(s)

Machine learning; Computational intelligence

Disciplines

Electrical and Computer Engineering

Abstract

One of the more challenging real-world problems in computational intelligence is learning from nonstationary streaming data, a setting also known as concept drift. Perhaps an even more challenging version of this scenario is one in which, following a small set of initial labeled data, the data stream consists of unlabeled data only. Such a scenario is typically referred to as learning in an initially labeled nonstationary environment, or simply as extreme verification latency (EVL). This thesis introduces two algorithms for this domain. The first is a simple modification of our prior work, COMPOSE (COMPacted Object Sample Extraction), that allows the algorithm to run without its extremely computationally expensive core support extraction module; we call this modified algorithm FAST COMPOSE. The second algorithm is based on the importance weighting approach to domain adaptation: we use importance weighting to match the distributions of two consecutive time steps, and estimate the posterior distribution of the unlabeled data with an importance weighted least squares probabilistic classifier. The estimated labels are then iteratively used as the training data for the next time step. We call this algorithm LEVEL_IW, short for Learning Extreme Verification Latency with Importance Weighting. An additional important contribution of this thesis is a comprehensive survey and comparative analysis of competing algorithms, pointing out the strengths and weaknesses of the different approaches from three perspectives: classification accuracy, computational complexity, and parameter sensitivity, using several synthetic and real-world datasets.
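The following is a minimal sketch of the iterative pseudo-labeling loop described above, assuming a Python/scikit-learn setting. It is not the thesis implementation: the thesis uses an importance weighted least squares probabilistic classifier and its own density-ratio estimation; here a weighted logistic regression and a classifier-based density-ratio estimate are hypothetical stand-ins used only to illustrate the loop structure.

import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(X_prev, X_curr):
    # Estimate w(x) ~ p_curr(x) / p_prev(x) for the previous-step samples by
    # training a probabilistic classifier to discriminate the two batches
    # (a simple stand-in for the density-ratio estimator used in the thesis).
    X = np.vstack([X_prev, X_curr])
    s = np.r_[np.zeros(len(X_prev)), np.ones(len(X_curr))]  # 0 = previous step, 1 = current step
    disc = LogisticRegression(max_iter=1000).fit(X, s)
    p_curr = disc.predict_proba(X_prev)[:, 1]
    return (p_curr / np.clip(1.0 - p_curr, 1e-12, None)) * (len(X_prev) / len(X_curr))

def level_iw_stream(X0, y0, unlabeled_stream):
    # At each time step: weight the previous batch by the estimated density ratio,
    # fit a weighted classifier, predict the current unlabeled batch, and reuse
    # those predictions as the training labels for the next step.
    X_train, y_train = X0, y0
    for X_t in unlabeled_stream:
        w = estimate_importance_weights(X_train, X_t)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_train, y_train, sample_weight=w)
        y_t = clf.predict(X_t)           # pseudo-labels for the current time step
        yield y_t
        X_train, y_train = X_t, y_t      # pseudo-labeled data becomes training data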
