Paper Title
Memory Efficient Class-Incremental Learning for Image Classification
Paper Authors
Paper Abstract
Under memory-resource-limited constraints, class-incremental learning (CIL) usually suffers from the "catastrophic forgetting" problem when updating the joint classification model on the arrival of newly added classes. To cope with the forgetting problem, many CIL methods transfer the knowledge of old classes by preserving some exemplar samples in a size-constrained memory buffer. To utilize the memory buffer more efficiently, we propose to keep more auxiliary low-fidelity exemplar samples rather than the original real high-fidelity exemplar samples. Such a memory-efficient exemplar-preserving scheme makes old-class knowledge transfer more effective. However, the low-fidelity exemplar samples are often distributed in a domain different from that of the original exemplar samples, i.e., there is a domain shift. To alleviate this problem, we propose a duplet learning scheme that seeks to construct domain-compatible feature extractors and classifiers, which greatly narrows the above domain gap. As a result, these low-fidelity auxiliary exemplar samples can moderately replace the original exemplar samples at a lower memory cost. In addition, we present a robust classifier adaptation scheme, which further refines the biased classifier (learned with samples carrying distillation label knowledge about the old classes) with the help of samples bearing pure true class labels. Experimental results demonstrate the effectiveness of this work compared with state-of-the-art approaches.
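The abstract makes two concrete technical points that a short sketch may help make tangible: a fixed-size buffer holds more exemplars when each one is stored at reduced fidelity, and the duplet scheme trains features so that the real and low-fidelity versions of an image stay close. The sketch below is a minimal illustration under assumed details, not the authors' implementation: the 32 MB budget, the bilinear downsampling, and the form of `duplet_feature_loss` are all hypothetical choices.

```python
# Minimal sketch (not the authors' code) of the two ideas in the abstract:
# (1) a fixed exemplar buffer holds more images when each is stored at low
#     fidelity, and (2) a "duplet" objective pulls the features of a real
#     image and its low-fidelity counterpart together so the two domains
#     stay compatible. All names and numbers here are illustrative.
import torch
import torch.nn.functional as F

MEMORY_BUDGET_BYTES = 32 * 1024 * 1024  # hypothetical 32 MB exemplar buffer

def exemplar_capacity(c: int, h: int, w: int) -> int:
    """How many uint8 images of shape (c, h, w) fit in the buffer."""
    return MEMORY_BUDGET_BYTES // (c * h * w)

def to_low_fidelity(images: torch.Tensor, scale: float = 0.5) -> torch.Tensor:
    """Downsample a float batch of shape (N, C, H, W) to cut storage cost."""
    return F.interpolate(images, scale_factor=scale, mode="bilinear",
                         align_corners=False)

def duplet_feature_loss(feat_low: torch.Tensor,
                        feat_real: torch.Tensor) -> torch.Tensor:
    """Align features of low-fidelity exemplars with those of their real
    counterparts (one plausible form of a domain-compatibility loss)."""
    return F.mse_loss(feat_low, feat_real.detach())

# Halving each side cuts per-image memory by 4x, so the same buffer
# stores roughly 4x as many old-class exemplars:
print(exemplar_capacity(3, 224, 224))  # full-fidelity capacity: 222
print(exemplar_capacity(3, 112, 112))  # low-fidelity capacity:  891
```

Under this reading, each new class's images are seen at full fidelity during training, the duplet loss ties their features to those of the downsampled copies, and only the cheap copies are retained in the buffer afterward; this pairing is one plausible way the domain gap described in the abstract gets narrowed.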