"Neural Self-Management: An Autonomous Framework for Continual Learning" by Christopher Angelini

Date Approved

5-12-2024

Embargo Period

5-12-2027

Document Type

Dissertation

Degree Name

Ph.D. Electrical and Computer Engineering

Department

Electrical and Computer Engineering

College

Henry M. Rowan College of Engineering

Advisor

Nidhal Bouaynaya, Ph.D.

Committee Member 1

Shen Shyang Ho, Ph.D.

Committee Member 2

Huaxia Wang, Ph.D.

Committee Member 3

Ghulam Rasool, Ph.D.

Committee Member 4

Ramaswamy Srinivasan, Ph.D.

Keywords

Continual Learning; Moment Propagation; OOD Detection; Uncertainty Quantification

Abstract

In the highly dynamic environments of the real world, it is unreasonable to assume that data seen during deployment will mimic the training distribution used when designing Deep Neural Networks (DNNs). Without the ability to adapt to new information after deployment, DNNs frequently require costly retraining and redeployment. In this work, we develop multiple Continual Learning frameworks that prevent catastrophic forgetting by leveraging a probabilistic deep learning framework called Moment Propagation (MP). Properties of MP are explored, including its ability to compress network representations, to identify parameters important to the current network representation, and to detect out-of-distribution data samples with respect to the network's training distribution using predictive variance. These concepts are leveraged in a continual learning setting to mitigate catastrophic forgetting over a sequence of tasks. Catastrophic forgetting is mitigated in a task-incremental learning setting with five parameter regularization-based methodologies, each regularizing network parameters in a different way. Each approach is evaluated on multiple benchmark datasets and compared to existing methods. The ability to determine when to adapt to new information during inference, modeled by data from untrained tasks, is demonstrated with our fourth method, Uncertainty-based Feature Masking. Finally, the ability to significantly mitigate catastrophic forgetting in a class-incremental learning setting, where task information is not provided during inference, is demonstrated with our final method, Dynamic Neural Architecture.
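As an illustrative sketch only (the dissertation develops its own formulation), the Python snippet below shows how the first two moments of a network with Gaussian weights can be propagated through a small fully connected model, with the total predictive variance at the output read as an out-of-distribution score. The layer sizes, variance values, and function names are hypothetical and not taken from the dissertation.

import numpy as np
from scipy.stats import norm

def linear_moments(mean, var, w_mean, w_var, bias):
    # Moments of y = W x + b for Gaussian W and Gaussian x, assuming independence
    out_mean = w_mean @ mean + bias
    out_var = w_var @ (var + mean ** 2) + (w_mean ** 2) @ var
    return out_mean, out_var

def relu_moments(mean, var, eps=1e-12):
    # First two moments of ReLU(x) for x ~ N(mean, var), applied elementwise
    std = np.sqrt(var + eps)
    z = mean / std
    cdf, pdf = norm.cdf(z), norm.pdf(z)
    out_mean = mean * cdf + std * pdf
    out_var = (mean ** 2 + var) * cdf + mean * std * pdf - out_mean ** 2
    return out_mean, np.maximum(out_var, 0.0)

# Toy two-layer network with Gaussian weights (hypothetical sizes and variances).
rng = np.random.default_rng(0)
w1_mean, w1_var = 0.3 * rng.standard_normal((16, 8)), np.full((16, 8), 0.05)
w2_mean, w2_var = 0.3 * rng.standard_normal((4, 16)), np.full((4, 16), 0.05)

x = rng.standard_normal(8)  # deterministic input, so its variance is zero
m, v = linear_moments(x, np.zeros_like(x), w1_mean, w1_var, np.zeros(16))
m, v = relu_moments(m, v)
m, v = linear_moments(m, v, w2_mean, w2_var, np.zeros(4))
print("predictive mean:", m)
print("total predictive variance (OOD score):", v.sum())

In this reading, a larger predictive variance suggests a sample lies farther from the training distribution, which is how the abstract describes out-of-distribution detection with MP.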
