Paper Title

Learning Polysemantic Spoof Trace: A Multi-Modal Disentanglement Network for Face Anti-spoofing

Paper Authors

Kaicheng Li, Hongyu Yang, Binghui Chen, Pengyu Li, Biao Wang, Di Huang

Paper Abstract

Along with the widespread use of face recognition systems, their vulnerability has drawn increasing attention. While existing face anti-spoofing methods can generalize across attack types, generic solutions remain challenging due to the diversity of spoof characteristics. Recently, the spoof trace disentanglement framework has shown great potential for coping with both seen and unseen spoof scenarios, but its performance is largely restricted by single-modal input. This paper focuses on this issue and presents a multi-modal disentanglement model that learns polysemantic spoof traces for more accurate and robust generic attack detection. In particular, based on an adversarial learning mechanism, a two-stream disentangling network is designed to estimate spoof patterns from the RGB and depth inputs, respectively, thereby capturing the complementary spoofing clues inherent in different attacks. Furthermore, a fusion module is introduced that recalibrates both representations at multiple stages to promote disentanglement within each individual modality, and then performs cross-modality aggregation to deliver a more comprehensive spoof trace representation for prediction. Extensive evaluations on multiple benchmarks demonstrate that learning polysemantic spoof traces benefits anti-spoofing, yielding more perceptible and interpretable results.
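
To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of the two-stream design: separate RGB and depth encoders, a fusion module that recalibrates each stream using statistics from the other modality before cross-modal aggregation, and per-modality spoof-trace heads. All module names, channel sizes, and the specific recalibration scheme (squeeze-and-excitation-style gating) are illustrative assumptions based only on the abstract, not the authors' implementation; the adversarial training objective is likewise omitted.

```python
# Minimal sketch of a two-stream multi-modal disentanglement network.
# Assumption: fusion is modeled as mutual channel-wise recalibration
# followed by 1x1-conv aggregation; the paper's exact design may differ.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Recalibrates each stream with channel attention driven by the other
    modality, then aggregates both into a joint spoof-trace representation."""

    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate_rgb = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.gate_depth = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.aggregate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_rgb, f_depth):
        b, c, _, _ = f_rgb.shape
        # Each modality gates the other: depth statistics recalibrate the RGB
        # stream and vice versa, promoting disentanglement in each modality.
        w_rgb = self.gate_rgb(self.pool(f_depth).view(b, c)).view(b, c, 1, 1)
        w_depth = self.gate_depth(self.pool(f_rgb).view(b, c)).view(b, c, 1, 1)
        f_rgb, f_depth = f_rgb * w_rgb, f_depth * w_depth
        fused = self.aggregate(torch.cat([f_rgb, f_depth], dim=1))
        return f_rgb, f_depth, fused


class TwoStreamDisentangler(nn.Module):
    """Two encoders estimate per-modality spoof traces; the fused
    representation feeds a binary live/spoof prediction head."""

    def __init__(self, channels=64):
        super().__init__()
        self.enc_rgb = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.enc_depth = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.fusion = CrossModalFusion(channels)
        self.trace_rgb = nn.Conv2d(channels, 3, 3, padding=1)    # trace in RGB space
        self.trace_depth = nn.Conv2d(channels, 1, 3, padding=1)  # trace in depth space
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1)
        )

    def forward(self, rgb, depth):
        f_rgb, f_depth, fused = self.fusion(self.enc_rgb(rgb), self.enc_depth(depth))
        return self.trace_rgb(f_rgb), self.trace_depth(f_depth), self.classifier(fused)


# Usage: in a spoof-trace framework, the estimated traces could be subtracted
# from the inputs to synthesize "live" counterparts for an adversarial
# (GAN-style) disentanglement objective, which this sketch does not include.
model = TwoStreamDisentangler()
t_rgb, t_depth, logit = model(torch.randn(2, 3, 256, 256), torch.randn(2, 1, 256, 256))
```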
