Paper Title
Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition
Paper Authors
Paper Abstract
The goal of face recognition (FR) can be viewed as a pair similarity optimization problem: maximizing a similarity set $\mathcal{S}^p$ over positive pairs while minimizing a similarity set $\mathcal{S}^n$ over negative pairs. Ideally, an FR model is expected to form a well-discriminative feature space (WDFS) satisfying $\inf \mathcal{S}^p > \sup \mathcal{S}^n$. With regard to WDFS, the existing deep feature learning paradigms (i.e., metric and classification losses) can be expressed from a unified perspective as different pair generation (PG) strategies. Unfortunately, with a metric loss (ML), it is infeasible to generate negative pairs covering all classes at each iteration because of the limited mini-batch size. In contrast, with a classification loss (CL), it is difficult to generate extremely hard negative pairs because the class weight vectors converge toward their centers. This leads to a mismatch between the similarity distribution of the sampled negative pairs and that of all negative pairs. Thus, this paper proposes unified negative pair generation (UNPG), which combines the two PG strategies (i.e., MLPG and CLPG) from a unified perspective to alleviate the mismatch. UNPG introduces useful information about negative pairs via MLPG to overcome the deficiency of CLPG. Moreover, it filters out the similarities of noisy negative pairs to guarantee reliable convergence and improved performance. Exhaustive experiments demonstrate the superiority of UNPG, achieving state-of-the-art performance with recent loss functions on public benchmark datasets. Our code and pretrained models are publicly available.
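To make the pair-generation view concrete, below is a minimal PyTorch sketch of how a unified negative similarity set could be assembled: CLPG similarities come from each embedding against its non-target class weight vectors, MLPG similarities come from different-identity embeddings within the mini-batch, and a fixed threshold stands in for noisy-pair filtering. The function name, the threshold `tau`, and the specific filtering rule are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of the UNPG idea described in the abstract: the negative
# similarity set S^n fed to the loss is the union of CL-style pairs
# (embedding vs. non-target class weights) and ML-style pairs (embedding vs.
# other-identity embeddings in the mini-batch), with a simple filter on the
# ML-side similarities. Names and the threshold rule are assumptions.
import torch
import torch.nn.functional as F

def unpg_negative_similarities(embeddings, labels, class_weights, tau=0.7):
    """Return a 1-D tensor of negative-pair cosine similarities.

    embeddings:    (B, D) mini-batch features
    labels:        (B,)   integer identity labels
    class_weights: (C, D) classifier weight vectors (one per identity)
    tau:           hypothetical threshold for filtering noisy ML pairs
    """
    z = F.normalize(embeddings, dim=1)
    w = F.normalize(class_weights, dim=1)

    # CLPG: similarity of each sample to every non-target class weight.
    cl_sims = z @ w.t()                                   # (B, C)
    cl_mask = torch.ones_like(cl_sims, dtype=torch.bool)
    cl_mask[torch.arange(z.size(0)), labels] = False      # drop positive logits
    cl_negs = cl_sims[cl_mask]

    # MLPG: similarities between mini-batch samples of different identities;
    # the upper triangle keeps each unordered pair exactly once.
    ml_sims = z @ z.t()                                   # (B, B)
    ml_mask = labels.unsqueeze(0) != labels.unsqueeze(1)  # different identities
    ml_mask = torch.triu(ml_mask, diagonal=1)
    ml_negs = ml_sims[ml_mask]

    # Filter noisy negative pairs: extremely high similarity between two
    # supposedly different identities is treated as label noise (assumption).
    ml_negs = ml_negs[ml_negs < tau]

    # Unified negative set: CLPG similarities plus filtered MLPG similarities.
    return torch.cat([cl_negs, ml_negs])
```

Under this reading, any recent margin-based loss can consume the returned set in place of its usual CL-only negative logits, which is how UNPG plugs into existing loss functions rather than replacing them.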