Paper Title


Hot-Refresh Model Upgrades with Regression-Alleviating Compatible Training in Image Retrieval

Authors

Binjie Zhang, Yixiao Ge, Yantao Shen, Yu Li, Chun Yuan, Xuyuan Xu, Yexin Wang, Ying Shan

Abstract


The task of hot-refresh model upgrades of image retrieval systems plays an essential role in industry but has never been investigated in academia before. Conventional cold-refresh model upgrades can only deploy new models after the gallery has been fully backfilled, taking weeks or even months for massive data. In contrast, hot-refresh model upgrades deploy the new model immediately and then gradually improve retrieval accuracy by backfilling the gallery on the fly. Compatible training has made this possible; however, the problem of model regression with negative flips poses a great challenge to the stable improvement of user experience. We argue that this is mainly because new-to-old positive query-gallery pairs may show less similarity than new-to-new negative pairs. To solve the problem, we introduce a Regression-Alleviating Compatible Training (RACT) method to properly constrain feature compatibility while reducing negative flips. The core idea is to encourage new-to-old positive pairs to be more similar than both the new-to-old negative pairs and the new-to-new negative pairs. An efficient uncertainty-based backfilling strategy is further introduced to accelerate accuracy improvements. Extensive experiments on large-scale retrieval benchmarks (e.g., Google Landmark) demonstrate that our RACT effectively alleviates model regression, taking one more step towards seamless model upgrades. The code will be available at https://github.com/binjiezhang/RACT_ICLR2022.
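The core constraint described in the abstract can be sketched as a margin-based hinge loss: the similarity between a new-model query feature and an old-model positive gallery feature should exceed the similarities to both the old-model and new-model negative features. This is a minimal illustrative sketch, not the authors' actual RACT objective; the function names, the use of cosine similarity, and the margin value are all assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def ract_style_loss(q_new, pos_old, neg_old, neg_new, margin=0.2):
    """Hinge loss pushing the new-to-old positive similarity above both
    the new-to-old and new-to-new negative similarities by a margin."""
    s_pos = cosine(q_new, pos_old)
    # penalize when an old-model negative is too close to the new query
    loss_vs_old_neg = max(0.0, margin + cosine(q_new, neg_old) - s_pos)
    # penalize when a new-model negative is too close to the new query
    loss_vs_new_neg = max(0.0, margin + cosine(q_new, neg_new) - s_pos)
    return loss_vs_old_neg + loss_vs_new_neg
```

When the positive pair is already well separated from both negative types, the loss is zero; otherwise the gradient pulls the new query feature toward the old positive and away from both negatives, which is the intuition behind reducing negative flips during backfilling.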
