Date Approved

5-23-2022

Embargo Period

5-25-2022

Document Type

Thesis

Degree Name

M.S. Computer Science

Department

Computer Science

College

College of Science & Mathematics

Advisor

Shen-Shyang Ho

Committee Member 1

Anthony Breitzman, Ph.D.

Committee Member 2

Ning Wang, Ph.D.

Keywords

Continual Learning, Embedded Devices

Subject(s)

Machine learning

Disciplines

Computer Sciences

Abstract

Continual Learning (CL) is a machine learning approach that focuses on learning from a continuous stream of data rather than from a single fixed dataset. This thesis applies that focus to machine learning on embedded devices, a field still in the early stages of development. Several algorithms are developed around this idea, including the use of pre-trained starting networks, weighted output schemes, and replay or reduced training datasets, all designed for low-resource devices while maintaining acceptable performance. The experimental results show an improvement in model training time compared with training a neural network on all available data: on the Fashion-MNIST dataset, accuracy drops from roughly 90% to 73% across 10 classes in exchange for a factor-of-10 reduction in training time. The second main result is a reduction in required memory: the non-replay algorithms store only one class worth of data at a time rather than the full dataset, and the replay-based algorithms store less than two classes worth of data for 10 classes, an overall memory reduction of roughly 80%. The goal is to provide a usable model while a fully trained network is developed on backend systems, limiting overall downtime while maintaining system performance.
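The thesis code itself is not included in this record. As a rough illustration of the class-incremental replay idea the abstract describes (storing a capped sample of past classes instead of the full dataset), the following Python sketch may help; the ReplayBuffer name, the per-class capacity, and the method signatures are illustrative assumptions, not the thesis implementation.

# Hypothetical sketch of a class-incremental replay buffer; names and
# capacities are assumptions for illustration, not the thesis code.
import random

class ReplayBuffer:
    """Keeps a small, capped sample of each past class so training on a
    new class can mix in old examples without storing the full dataset."""

    def __init__(self, capacity_per_class=1200):
        self.capacity = capacity_per_class
        self.store = {}  # class label -> list of stored examples

    def add_class(self, label, examples):
        # Downsample: keep at most `capacity` examples per class.
        if len(examples) > self.capacity:
            examples = random.sample(examples, self.capacity)
        self.store[label] = list(examples)

    def sample(self, n):
        # Draw a mixed batch of past-class examples for replay.
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        return random.sample(pool, min(n, len(pool)))

# Usage: when a new class arrives, train on its data plus buffer.sample(n)
# so the model revisits old classes while only buffer-sized memory is kept.

Under this scheme, the memory cost is bounded by the buffer capacity rather than the dataset size, which is consistent with the abstract's claim of storing less than two classes worth of data for a 10-class problem.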
