Paper Title

Towards Universal Backward-Compatible Representation Learning

Authors

Binjie Zhang, Yixiao Ge, Yantao Shen, Shupeng Su, Fanzi Wu, Chun Yuan, Xuyuan Xu, Yexin Wang, Ying Shan

Abstract

Conventional model upgrades for visual search systems require an offline refresh of gallery features by feeding gallery images into the new model (dubbed "backfill"), which is time-consuming and expensive, especially in large-scale applications. The task of backward-compatible representation learning is therefore introduced to support backfill-free model upgrades, where the new query features are interoperable with the old gallery features. Despite their success, previous works only investigated a close-set training scenario (i.e., the new training set shares the same classes as the old one) and are limited in more realistic and challenging open-set scenarios. To this end, we first introduce the new problem of universal backward-compatible representation learning, covering all possible data splits in model upgrades. We further propose a simple yet effective method, dubbed Universal Backward-Compatible Training (UniBCT), with a novel structural prototype refinement algorithm, to learn compatible representations across all kinds of model upgrading benchmarks in a unified manner. Comprehensive experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C fully demonstrate the effectiveness of our method.
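The core mechanism behind backward compatibility described above (new query features remaining interoperable with old gallery features) is commonly realized by classifying the new model's embeddings against the frozen class prototypes of the old model, so that both feature spaces stay aligned. Below is a minimal NumPy sketch of such an influence-style compatibility loss; the function name, cosine-logit formulation, and data setup are illustrative assumptions for exposition, not the exact loss from the paper:

```python
import numpy as np

def compatibility_loss(new_feats, labels, old_prototypes):
    """Softmax cross-entropy of new-model features against FROZEN old-model
    class prototypes. Minimizing this pulls new embeddings toward the old
    feature space, enabling backfill-free retrieval.
    (Illustrative sketch; not the paper's exact UniBCT objective.)"""
    # L2-normalize features and prototypes, then take cosine logits.
    f = new_feats / np.linalg.norm(new_feats, axis=1, keepdims=True)
    p = old_prototypes / np.linalg.norm(old_prototypes, axis=1, keepdims=True)
    logits = f @ p.T
    # Numerically stable log-softmax.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each sample's ground-truth class.
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy check: features aligned with their own class prototype incur a
# lower loss than features aligned with the wrong prototype.
protos = np.eye(3)                                   # frozen old prototypes
aligned = compatibility_loss(np.eye(3), [0, 1, 2], protos)
misaligned = compatibility_loss(np.eye(3), [1, 2, 0], protos)
```

In practice this term would be added to the new model's own classification loss during training, so the upgraded model both improves discriminability and stays usable against the un-backfilled gallery.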
