Paper Title

Style-Guided Domain Adaptation for Face Presentation Attack Detection

Authors

Young-Eun Kim, Woo-Jeoung Nam, Kyungseo Min, Seong-Whan Lee

Abstract

Domain adaptation (DA) or domain generalization (DG) for face presentation attack detection (PAD) has recently attracted attention for its robustness against unseen attack scenarios. Existing DA/DG-based PAD methods, however, have not yet fully explored the domain-specific style information that can provide knowledge about attack styles (e.g., materials, background, illumination, and resolution). In this paper, we introduce a novel Style-Guided Domain Adaptation (SGDA) framework for inference-time adaptive PAD. Specifically, Style-Selective Normalization (SSN) is proposed to exploit the domain-specific style information contained in high-order feature statistics. The proposed SSN adapts the model to the target domain by reducing the style difference between the target and source domains. Moreover, we carefully design Style-Aware Meta-Learning (SAML) to boost the adaptation ability, which simulates inference-time adaptation with a style-selection process on a virtual test domain. In contrast to previous domain adaptation approaches, our method requires neither additional auxiliary models (e.g., domain adaptors) nor unlabeled target-domain data during training, which makes it more practical for the PAD task. To verify our method, we use the public datasets MSU-MFSD, CASIA-FASD, OULU-NPU, and Idiap REPLAYATTACK. In most assessments, the results demonstrate a notable performance gap compared to conventional DA/DG-based PAD methods.
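To make the style-selection idea in the abstract concrete, the sketch below illustrates one plausible reading of it: channel-wise feature statistics (mean and standard deviation) act as a "style" code, the source-domain style closest to an incoming target sample is selected, and the features are re-normalized toward that style with an AdaIN-like transform. This is a minimal PyTorch sketch under our own assumptions; the class and function names (SourceStyleBank, style_stats, etc.) are illustrative and do not reproduce the paper's actual SSN or SAML implementation.

```python
# Hypothetical sketch of style selection via feature statistics (not the paper's code).
import torch
import torch.nn.functional as F


def style_stats(feat: torch.Tensor, eps: float = 1e-5):
    """Channel-wise mean and std of a feature map of shape (N, C, H, W)."""
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mean, std


class SourceStyleBank:
    """Stores one (mean, std) style prototype per source domain."""

    def __init__(self):
        self.prototypes = {}  # domain name -> (mean, std), each of shape (1, C, 1, 1)

    def update(self, domain: str, feat: torch.Tensor):
        # Average the per-sample statistics into a single prototype for the domain.
        mean, std = style_stats(feat)
        self.prototypes[domain] = (mean.mean(0, keepdim=True),
                                   std.mean(0, keepdim=True))

    def select(self, feat: torch.Tensor) -> str:
        """Pick the source domain whose style is closest to the input's style."""
        mean, std = style_stats(feat)
        query = torch.cat([mean.mean(0), std.mean(0)]).flatten()
        dists = {
            d: F.pairwise_distance(query[None],
                                   torch.cat([m, s]).flatten()[None]).item()
            for d, (m, s) in self.prototypes.items()
        }
        return min(dists, key=dists.get)

    def renormalize(self, feat: torch.Tensor, domain: str) -> torch.Tensor:
        """Shift the input's style toward the selected source-domain style (AdaIN-like)."""
        mean, std = style_stats(feat)
        proto_mean, proto_std = self.prototypes[domain]
        return (feat - mean) / std * proto_std + proto_mean


if __name__ == "__main__":
    # Toy usage with random features standing in for backbone activations.
    bank = SourceStyleBank()
    for name in ["MSU-MFSD", "CASIA-FASD", "OULU-NPU"]:
        bank.update(name, torch.randn(8, 64, 32, 32) * torch.rand(1) + torch.rand(1))
    target_feat = torch.randn(1, 64, 32, 32)
    chosen = bank.select(target_feat)
    adapted = bank.renormalize(target_feat, chosen)
    print(chosen, adapted.shape)
```

Because the selection and re-normalization operate only on running feature statistics, no auxiliary adapter network or unlabeled target data is needed at training time, which matches the practicality claim in the abstract.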
