Paper Title
Improved Robustness to Open Set Inputs via Tempered Mixup
Paper Authors
Paper Abstract
Supervised classification methods often assume that evaluation data is drawn from the same distribution as the training data and that all classes are present during training. However, real-world classifiers must handle inputs that are far from the training distribution, including samples from unknown classes. Open set robustness refers to the ability to properly label samples from previously unseen categories as novel while avoiding high-confidence, incorrect predictions. Existing approaches have focused on novel inference methods, unique training architectures, or supplementing the training data with additional background samples. Here, we propose a simple regularization technique, easily applied to existing convolutional neural network architectures, that improves open set robustness without a background dataset. Our method achieves state-of-the-art results on open set classification baselines and scales readily to large-scale open set classification problems.
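The abstract does not spell out the mechanics of the regularizer, but mixup-style regularization in general blends pairs of training examples and their labels with a Beta-distributed coefficient. Below is a minimal sketch of plain mixup for context; the function name, the `alpha` default, and the Beta sampling are illustrative assumptions, and the paper's "tempered" variant is presumably a modification of the label interpolation that this abstract does not specify.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Blend a batch of inputs and one-hot labels with a shuffled copy.

    This is plain mixup regularization, not the paper's exact tempered
    variant, which the abstract does not describe in detail.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))      # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

Because the mixed labels are convex combinations of one-hot vectors, training on them discourages the over-confident predictions that open set inputs tend to trigger, which is the intuition the abstract appeals to.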