Document Type
Article
Version Deposited
Submitted for publication (PrePrint)
Publication Date
11-26-2020
Abstract
One of the more challenging real-world problems in computational intelligence is learning from non-stationary streaming data, a phenomenon also known as concept drift. Perhaps an even more challenging version of this scenario is when, following a small set of initial labeled data, the data stream consists of unlabeled data only. Such a scenario is typically referred to as learning in an initially labeled non-stationary environment, or simply as extreme verification latency (EVL). Because of the very challenging nature of the problem, few algorithms have been proposed in the literature to date. This work is a first effort to provide the research community with a review of some of the important and prominent existing algorithms in this field. More specifically, this paper is a comprehensive survey and comparative analysis of several EVL algorithms that points out the weaknesses and strengths of the different approaches from three perspectives: classification accuracy, computational complexity, and parameter sensitivity, evaluated on several synthetic and real-world datasets.
Recommended Citation
Muhammad Umer & Robi Polikar. 2020. Comparative Analysis of Extreme Verification Latency Learning Algorithms. arXiv:2011.14917v1 [cs.LG].
Comments
This is a preprint posted to arXiv.