Title
Rank-N-Contrast: Learning Continuous Representations for Regression
Authors
Abstract
Deep regression models typically learn in an end-to-end fashion without explicitly emphasizing a regression-aware representation. Consequently, the learned representations exhibit fragmentation and fail to capture the continuous nature of sample orders, inducing suboptimal results across a wide range of regression tasks. To fill the gap, we propose Rank-N-Contrast (RNC), a framework that learns continuous representations for regression by contrasting samples against each other based on their rankings in the target space. We demonstrate, theoretically and empirically, that RNC guarantees the desired order of learned representations in accordance with the target orders, enjoying not only better performance but also significantly improved robustness, efficiency, and generalization. Extensive experiments using five real-world regression datasets that span computer vision, human-computer interaction, and healthcare verify that RNC achieves state-of-the-art performance, highlighting its intriguing properties including better data efficiency, robustness to spurious targets and data corruptions, and generalization to distribution shifts. Code is available at: https://github.com/kaiwenzha/Rank-N-Contrast.
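To make the core idea concrete, below is a minimal NumPy sketch of a ranking-based contrastive objective in the spirit described above: for each anchor, a positive sample is contrasted against all samples that are at least as far from the anchor in label space. The function name `ranked_contrast_loss`, the use of negative L2 feature distance as similarity, and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ranked_contrast_loss(features, targets, temperature=2.0):
    """Sketch of a ranking-based contrastive loss for regression.

    For each anchor i and positive j, the contrast set contains every
    sample k whose label distance to i is at least that of j, i.e.
    |y_k - y_i| >= |y_j - y_i|.  (Illustrative assumption, not the
    paper's exact loss.)
    """
    n = len(targets)
    # L2-normalize features; use negative pairwise distance as similarity
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    sim = -dists / temperature

    loss, count = 0.0, 0
    for i in range(n):
        label_dist = np.abs(targets - targets[i])  # distances in label space
        for j in range(n):
            if j == i:
                continue
            # contrast set: samples ranked at least as far from i as j is
            mask = label_dist >= label_dist[j]
            mask[i] = False  # exclude the anchor itself
            denom = np.sum(np.exp(sim[i, mask]))  # includes j, so denom > 0
            loss += -(sim[i, j] - np.log(denom))
            count += 1
    return loss / count
```

Because the denominator always contains the positive pair itself, each per-pair term is non-negative; a feature space whose ordering mirrors the target ordering drives the loss toward zero.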