Paper Title
A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials
Paper Authors
Abstract
A number of benchmarks and techniques have emerged for the detection of deepfakes. However, very few works study the detection of incrementally appearing deepfakes in real-world scenarios. To simulate such wild scenes, this paper proposes a Continual Deepfake Detection Benchmark (CDDB) over a new collection of deepfakes from both known and unknown generative models. The proposed CDDB designs multiple evaluations of detection over easy, hard, and long sequences of deepfake tasks, with a set of appropriate measures. In addition, we exploit multiple approaches to adapt multi-class incremental learning methods, commonly used in continual visual recognition, to the continual deepfake detection problem. We evaluate existing methods, including their adapted versions, on the proposed CDDB. Within the proposed benchmark, we explore some commonly known essentials of standard continual learning. Our study provides new insights into these essentials in the context of continual deepfake detection. The proposed CDDB is clearly more challenging than existing benchmarks, and thus offers a suitable evaluation avenue for future research. Both data and code are available at https://github.com/Coral79/CDDB.
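As a rough illustration of what evaluating a detector "over a sequence of deepfake tasks with a set of appropriate measures" can look like, the sketch below computes two standard continual-learning quantities, average accuracy and average forgetting, from a per-task accuracy matrix. These are generic continual-learning metrics and a hypothetical toy matrix, not necessarily the exact measures or numbers used by CDDB.

```python
def average_accuracy(acc_matrix):
    """Mean accuracy over all tasks after training on the final task.

    acc_matrix[i][j] is the accuracy on task j measured after the model
    has been trained through task i (0.0 where j > i, i.e. unseen tasks).
    """
    last_row = acc_matrix[-1]
    return sum(last_row) / len(last_row)


def average_forgetting(acc_matrix):
    """Mean drop from each earlier task's best accuracy to its final accuracy."""
    n = len(acc_matrix)
    if n < 2:
        return 0.0
    drops = []
    for j in range(n - 1):  # the last task cannot be forgotten yet
        best = max(acc_matrix[i][j] for i in range(n - 1))
        drops.append(best - acc_matrix[-1][j])
    return sum(drops) / len(drops)


# Toy accuracy matrix for a hypothetical 3-task deepfake sequence
# (rows: state after training task i; columns: accuracy on task j).
acc = [
    [0.95, 0.00, 0.00],
    [0.80, 0.92, 0.00],
    [0.70, 0.85, 0.90],
]
print(average_accuracy(acc))    # mean of the last row
print(average_forgetting(acc))  # mean accuracy drop on earlier tasks
```

A benchmark over easy, hard, and long task sequences would report such measures per sequence, so that both final performance and the degradation on earlier generative models are visible.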