Wiwatcharakoses, Chayut and Berrar, Daniel (2020).
DOI: https://doi.org/10.1016/j.eswa.2019.113069
Abstract
The goal of continuous learning is to acquire and fine-tune knowledge incrementally without erasing already existing knowledge. How to mitigate this erasure, known as catastrophic forgetting, is a grand challenge for machine learning, particularly when systems are trained on evolving data streams. Self-organizing incremental neural networks (SOINN) are a class of neural networks designed for continuous learning from non-stationary data. Here, we propose a novel method, SOINN+, which is fundamentally different from the previous generations of SOINN with respect to how “forgetting” is modeled. To achieve a more graceful forgetting, we developed three new concepts: idle time, trustworthiness, and unutility of a node. SOINN+ learns a topology-preserving mapping of the input data to a network structure, which reveals clusters of arbitrary shapes in streams of noisy data. Our experiments with synthetic and real-world data sets showed that SOINN+ can maintain discovered structures even under sudden and recurring concept drifts.
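The abstract does not spell out how idle time, trustworthiness, or unutility are computed, so the following is only a rough Python sketch of the general SOINN-style workflow it alludes to: nearest-node matching, threshold-based node insertion, idle-time bookkeeping for nodes that stop winning, and pruning of rarely used nodes. The class name ToySOINN, the fixed similarity_threshold and max_idle parameters, and the pruning rule are hypothetical placeholders, not the data-driven criteria defined in the paper.

```python
# Illustrative SOINN-style incremental clustering sketch (NOT the SOINN+ algorithm).
# The full SOINN family also maintains edges between the winner and runner-up node
# to preserve topology; that part is omitted here for brevity.
import numpy as np


class ToySOINN:
    def __init__(self, similarity_threshold=1.0, max_idle=200):
        # Hypothetical fixed thresholds; SOINN+ derives its deletion criteria from the data.
        self.similarity_threshold = similarity_threshold
        self.max_idle = max_idle
        self.weights = []  # node prototype vectors
        self.wins = []     # how often each node was the best match
        self.idle = []     # time steps since each node last won

    def partial_fit(self, x):
        """Process one sample from the stream."""
        x = np.asarray(x, dtype=float)
        if len(self.weights) < 2:
            self._add_node(x)
            return
        dists = [np.linalg.norm(x - w) for w in self.weights]
        winner = int(np.argmin(dists))
        if dists[winner] > self.similarity_threshold:
            # Input is far from all existing prototypes: create a new node.
            self._add_node(x)
            winner = len(self.weights) - 1  # new node counts as the winner
        else:
            # Adapt the winner toward the input; learning rate decays with win count.
            self.wins[winner] += 1
            lr = 1.0 / self.wins[winner]
            self.weights[winner] = self.weights[winner] + lr * (x - self.weights[winner])
            self.idle[winner] = 0
        # Every non-winning node accumulates idle time.
        for i in range(len(self.weights)):
            if i != winner:
                self.idle[i] += 1
        self._prune()

    def _add_node(self, x):
        self.weights.append(x.copy())
        self.wins.append(1)
        self.idle.append(0)

    def _prune(self):
        # Placeholder "unutility": drop nodes that stayed idle too long relative
        # to how often they have won (frequently winning nodes are kept longer).
        keep = [i for i in range(len(self.weights))
                if self.idle[i] <= self.max_idle * self.wins[i]]
        self.weights = [self.weights[i] for i in keep]
        self.wins = [self.wins[i] for i in keep]
        self.idle = [self.idle[i] for i in keep]


if __name__ == "__main__":
    # Feed a small synthetic 2-D stream with two well-separated clusters.
    rng = np.random.default_rng(0)
    net = ToySOINN(similarity_threshold=1.0, max_idle=200)
    for _ in range(1000):
        center = rng.choice([0.0, 5.0])
        net.partial_fit(center + 0.2 * rng.standard_normal(2))
    print(f"nodes kept: {len(net.weights)}")
```

In this toy version, forgetting happens abruptly once the fixed idle budget is exceeded; the abstract's point is that SOINN+ replaces such hand-tuned cutoffs with idle time, trustworthiness, and unutility measures that make forgetting more graceful under noise and concept drift.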