Paper Title
Analysis of the Scalability of a Deep-Learning Network for Steganography "Into the Wild"
Paper Authors
Paper Abstract
Since the emergence of deep learning and its adoption in the steganalysis field, most reference articles have kept using small to medium-size CNNs and have trained them on relatively small databases. Benchmarks and comparisons between deep-learning-based steganalysis algorithms, more precisely between CNNs, are thus made on small to medium databases. This is done without knowing: 1. whether the ranking, under a criterion such as accuracy, remains the same when the database is larger, 2. whether the efficiency of CNNs collapses when the training database is orders of magnitude larger, 3. the minimum size required for a database or a CNN in order to obtain results better than a random guesser. In this paper, after a solid discussion of the observed behaviour of CNNs as a function of their size and of the database size, we confirm that the error power-law also holds in steganalysis, and this in a border case, i.e. with a medium-size network on a big, constrained, and very diverse database.
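The error power-law mentioned in the abstract refers to the empirical observation that the detection error decreases roughly as error ≈ a·N^(−b) with the number N of training examples. A minimal sketch of how such a law can be checked, using purely synthetic (hypothetical) error values rather than any figures from the paper:

```python
import numpy as np

# Hypothetical error rates for growing training-set sizes N.
# These values are illustrative only; real steganalysis measurements
# would be noisy and would plateau below 0.5 (random guessing).
N = np.array([1e3, 1e4, 1e5, 1e6])
error = 0.5 * N ** -0.2  # synthetic data generated from a known power-law

# A power-law error = a * N**(-b) is linear in log-log coordinates:
#   log(error) = log(a) - b * log(N),
# so a straight-line fit recovers the exponent b and the prefactor a.
slope, intercept = np.polyfit(np.log(N), np.log(error), 1)
b = -slope
a = np.exp(intercept)
print(f"fitted exponent b = {b:.3f}, prefactor a = {a:.3f}")
```

On noisy real-world data one would fit the same line by least squares over many (N, error) measurements and inspect how well the points align in log-log scale before trusting the exponent.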