Paper Title

Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition

Paper Authors

Haotao Wang, Aston Zhang, Yi Zhu, Shuai Zheng, Mu Li, Alex Smola, Zhangyang Wang

Paper Abstract

Existing out-of-distribution (OOD) detection methods are typically benchmarked on training sets with balanced class distributions. However, in real-world applications, it is common for the training sets to have long-tailed distributions. In this work, we first demonstrate that existing OOD detection methods commonly suffer from significant performance degradation when the training set is long-tail distributed. Through analysis, we posit that this is because the models struggle to distinguish the minority tail-class in-distribution samples from the true OOD samples, making the tail classes more prone to be falsely detected as OOD. To solve this problem, we propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples. To further boost in-distribution classification accuracy, we propose Auxiliary Branch Finetuning, which uses two separate branches of batch normalization (BN) and classification layers for anomaly detection and in-distribution classification, respectively. The intuition is that in-distribution and OOD anomaly data have different underlying distributions. Our method outperforms the previous state-of-the-art method by $1.29\%$, $1.45\%$, $0.69\%$ anomaly detection false positive rate (FPR) and $3.24\%$, $4.06\%$, $7.89\%$ in-distribution classification accuracy on CIFAR10-LT, CIFAR100-LT, and ImageNet-LT, respectively. Code and pre-trained models are available at https://github.com/amazon-research/long-tailed-ood-detection.
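To make the PASCL idea in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch of a "partial and asymmetric" supervised contrastive loss: only tail-class in-distribution samples serve as anchors (partial), while auxiliary OOD samples appear only in the denominator, so they are pushed away from tail-class anchors but never act as anchors themselves (asymmetric). The function name, the `-1` label convention for OOD samples, and the temperature value are illustrative assumptions, not the authors' implementation; the actual loss is in the linked repository.

```python
# Hypothetical sketch (not the authors' code) of a partial, asymmetric
# supervised contrastive loss for long-tailed OOD detection.
# `features`: (N, D) L2-normalized embeddings; `labels`: (N,) with -1 marking OOD samples;
# `tail_classes`: 1-D tensor of tail-class indices.
import torch

def partial_asymmetric_supcon_loss(features, labels, tail_classes, temperature=0.1):
    sim = features @ features.t() / temperature               # pairwise cosine similarities
    not_self = ~torch.eye(len(labels), dtype=torch.bool)      # mask out self-pairs
    is_tail = torch.isin(labels, tail_classes)                # partial: tail-class anchors only

    loss, n_anchors = 0.0, 0
    for i in torch.where(is_tail)[0]:
        pos = (labels == labels[i]) & not_self[i]              # same-class positives
        if pos.sum() == 0:
            continue
        # Denominator contains every other sample (other-class ID and OOD), so OOD
        # samples are repelled from tail anchors but never anchor the loss (asymmetric).
        log_prob = sim[i] - torch.logsumexp(sim[i][not_self[i]], dim=0)
        loss = loss - log_prob[pos].mean()
        n_anchors += 1
    return loss / max(n_anchors, 1)
```

Under these assumptions, a training step would compute L2-normalized projection-head outputs for a mixed batch of in-distribution and auxiliary OOD images and add this term to the classification loss; the Auxiliary Branch Finetuning described in the abstract would then be a separate fine-tuning pass that routes in-distribution and OOD data through different BN/classifier branches.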
