Reducing catastrophic forgetting with learning on synthetic data
- Wojciech Masarczyk
- Ivona Tautkute
Catastrophic forgetting is a problem caused by neural networks' inability to learn data in sequence: after learning two tasks in sequence, performance on the first drops significantly. This is a serious disadvantage that prevents the application of deep learning to many real-life problems where not all object classes are known beforehand, or where changes in the data require adjustments to the model. To reduce this problem we investigate the use of synthetic data; namely, we ask: is it possible to generate synthetic data that, when learned in sequence, does not result in catastrophic forgetting? We propose a method to generate such data in a two-step optimisation process via meta-gradients. Our experimental results on the Split-MNIST dataset show that training a model on such synthetic data in sequence does not result in catastrophic forgetting. We also show that our method of generating data is robust to different learning scenarios.
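The two-step optimisation via meta-gradients mentioned in the abstract can be sketched on a toy scalar model: an inner step takes one SGD update on the synthetic example, and an outer step differentiates through that update to adjust the synthetic label so that the post-update model fits the real data. This is a minimal illustrative sketch in the dataset-distillation spirit; the model, data, and hyperparameters are assumptions, not the paper's actual setup.

```python
# Toy two-step meta-gradient sketch (hypothetical setup, not the authors' code).
# Model: y = w * x. One "real" example and one synthetic example whose label
# y_syn is optimised so that a model trained on it also fits the real data.

ALPHA = 0.1  # inner-loop learning rate (model update on synthetic data), assumed
ETA = 0.5    # outer-loop learning rate (update of the synthetic label), assumed

x_real, y_real = 1.0, 2.0  # "real" example the updated model should fit
x_syn = 1.0                # synthetic input (kept fixed for simplicity)
y_syn = 0.0                # synthetic label, optimised via meta-gradients

def inner_then_outer(y_s, w0=0.0):
    """Inner step: one SGD step on the synthetic example, starting from w0.
    Returns the updated weight and the outer loss on the real example."""
    w1 = w0 - ALPHA * 2.0 * x_syn * (w0 * x_syn - y_s)  # inner SGD step
    return w1, (w1 * x_real - y_real) ** 2              # outer loss

for _ in range(200):
    w1, _ = inner_then_outer(y_syn)
    # Meta-gradient d(outer loss)/d(y_syn), obtained with the chain rule by
    # differentiating through the inner update: dw1/dy_syn = 2 * ALPHA * x_syn.
    grad = 2.0 * x_real * (w1 * x_real - y_real) * (2.0 * ALPHA * x_syn)
    y_syn -= ETA * grad

_, final_loss = inner_then_outer(y_syn)  # near zero after meta-optimisation
```

For a neural network the same pattern applies, except the meta-gradient is obtained by backpropagating through the inner SGD step (e.g. with an autodiff framework) rather than by the hand-derived chain rule used here.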
- Lisa O’Conner (ed.): Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2020, Institute of Electrical and Electronics Engineers, ISBN 978-1-7281-9360-1. DOI:10.1109/CVPRW50498.2020
- DOI:10.1109/CVPRW50498.2020.00134
- https://ieeexplore.ieee.org/document/9150615
- English
- Score (nominal): 20.0, 14-06-2021, ChapterFromConference
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.