Date Approved


Embargo Period


Document Type


Degree Name

Doctor of Philosophy (Ph.D.)


Department

Electrical and Computer Engineering


College

Henry M. Rowan College of Engineering


Advisor

Nidhal Carla Bouaynaya, Ph.D.

Committee Member 1

Ravi Prakash Ramachandran, Ph.D.

Committee Member 2

Robi Polikar, Ph.D.

Committee Member 3

Ghulam Rasool, Ph.D.

Committee Member 4

Shlomo Engelberg, Ph.D.


Keywords

Artificial Intelligence; Catastrophic Forgetting; Continual Learning; Deep Learning; Machine Learning; Neuroscience


Subject(s)

Deep learning (Machine learning)


Disciplines

Electrical and Computer Engineering | Physical Sciences and Mathematics


Abstract

Continual learning (CL) enables deep learning models to learn new tasks sequentially while preserving performance on previously learned tasks, akin to the human ability to accumulate knowledge over time. However, existing CL approaches face the challenge of catastrophic forgetting, in which a model's performance on previously learned tasks declines after it learns a new task. In this dissertation, we focus on the crucial role of input data features in determining the robustness of CL models against catastrophic forgetting. We propose a framework for creating CL-robustified versions of standard datasets using a pre-trained Oracle CL model. Our experiments show that CL models trained on CL-robust features mitigate catastrophic forgetting. We then introduce a novel, neuroscience-inspired approach called robust rehearsal, which distills CL-robust samples without requiring a pre-trained Oracle model or pre-distilled CL-robust samples to train the CL model. We demonstrate that rehearsal-based CL approaches mitigate catastrophic forgetting on CIFAR10, CIFAR100, and real-world helicopter attitude datasets provided by the Federal Aviation Administration (FAA). Moreover, we propose an approach that addresses the observed problem of overfitting to rehearsal memory through adversarial diversification: increasing the complexity of rehearsal samples reduces memory overfitting and maintains their effectiveness throughout sequential learning. Finally, we conduct an extensive study that further elucidates the crucial role of features in shaping a model's overall characteristics, specifically its robustness and its ability to mitigate catastrophic forgetting.

Available for download on Thursday, May 01, 2025