Document Type

Article

Version Deposited

Published Version

Open Access Funding Source

Open Access Publishing Fund

Publication Date

2-23-2024

Publication Title

IEEE Access

DOI

10.1109/ACCESS.2024.3369488

Abstract

Artificial intelligence and neuroscience have a long and intertwined history. Advancements in neuroscience research have significantly influenced the development of artificial intelligence systems that have the potential to retain knowledge in a manner akin to humans. Building upon foundational insights from neuroscience and existing research in adversarial and continual learning, we introduce a novel framework that comprises two key concepts: feature distillation and re-consolidation. The framework distills continual learning (CL)-robust features and rehearses them while learning the next task, aiming to replicate the mammalian brain's process of consolidating memories by rehearsing a distilled version of waking experiences. Furthermore, the proposed framework emulates the mammalian brain's mechanism of memory re-consolidation, in which novel experiences influence the assimilation of previous experiences via feature re-consolidation. This process incorporates the CL model's new understanding, acquired after learning the current task, into the CL-robust samples of the previous task(s) to mitigate catastrophic forgetting. The proposed framework, called Robust Rehearsal, circumvents the limitations of existing CL frameworks that rely on the availability of pre-trained Oracle CL models to pre-distill CL-robustified datasets for training subsequent CL models. We conducted extensive experiments on three datasets, CIFAR10, CIFAR100, and a real-world helicopter attitude dataset, demonstrating that CL models trained using Robust Rehearsal outperform their baseline counterparts. In addition, we conducted a series of experiments assessing the impact of varying memory sizes and numbers of tasks, demonstrating that baseline methods employing Robust Rehearsal outperform those trained without it. Lastly, to shed light on the existence of diverse features, we explore the effects of various optimization training objectives within joint, continual, and adversarial learning on feature learning in deep neural networks. Our findings indicate that the optimization objective dictates feature learning, which plays a vital role in model performance. This observation further emphasizes the importance of rehearsing CL-robust samples in alleviating catastrophic forgetting. In light of our experiments, closely following neuroscience insights can contribute to developing CL approaches that mitigate the long-standing challenge of catastrophic forgetting.
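
The following is a minimal, illustrative Python/PyTorch sketch of a rehearsal-style continual learning loop with feature distillation and re-consolidation as described in the abstract. It is not the authors' published implementation: the model, task splits, and the helper names distill_robust_samples and reconsolidate_buffer are hypothetical placeholders chosen only to show how distilled samples could be rehearsed during the next task and refreshed afterward.

    # Illustrative sketch only; all names and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256),
                          nn.ReLU(), nn.Linear(256, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    def distill_robust_samples(model, x, y, steps=10, lr=0.1):
        # Optimize copies of the inputs so the current model classifies them
        # confidently; these serve as a compact summary of the task (a stand-in
        # for the paper's CL-robust distilled features).
        x_d = x.clone().requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x_d), y)
            grad, = torch.autograd.grad(loss, x_d)
            x_d = (x_d - lr * grad).detach().requires_grad_(True)
        return x_d.detach(), y

    def reconsolidate_buffer(model, buffer, steps=5, lr=0.05):
        # Refresh stored samples under the model's post-task parameters,
        # loosely mimicking memory re-consolidation.
        return [distill_robust_samples(model, x, y, steps, lr) for x, y in buffer]

    buffer = []  # rehearsal memory of distilled (inputs, labels) pairs

    def train_task(task_loader, epochs=1):
        for _ in range(epochs):
            for x, y in task_loader:
                opt.zero_grad()
                loss = F.cross_entropy(model(x), y)
                for bx, by in buffer:  # rehearse distilled memories of past tasks
                    loss = loss + F.cross_entropy(model(bx), by)
                loss.backward()
                opt.step()

    # Example with random stand-in "tasks" (replace with real CIFAR task splits).
    for task_id in range(3):
        x_task = torch.randn(64, 3, 32, 32)
        y_task = torch.randint(0, 10, (64,))
        train_task([(x_task, y_task)])
        buffer.append(distill_robust_samples(model, x_task[:16], y_task[:16]))
        buffer[:] = reconsolidate_buffer(model, buffer)  # fold new knowledge into old memories
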

Comments

IEEE Access is IEEE's Open Access journal. Copyright is retained by the authors.


Publication of this article was supported by the Rowan University Libraries 2023-24 Open Access Publishing Fund.

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.
