Paper Title


MFFN: Multi-view Feature Fusion Network for Camouflaged Object Detection

Paper Authors

Dehua Zheng, Xiaochen Zheng, Laurence T. Yang, Yuan Gao, Chenlu Zhu, Yiheng Ruan

Paper Abstract


Recent research on camouflaged object detection (COD) aims to segment highly concealed objects hidden in complex surroundings. Tiny, fuzzy camouflaged objects are visually indistinguishable from their surroundings, and current single-view COD detectors are sensitive to background distractors. The blurred boundaries and variable shapes of camouflaged objects are therefore difficult to capture fully with a single-view detector. To overcome these obstacles, we propose a behavior-inspired framework, called Multi-view Feature Fusion Network (MFFN), which mimics the human behavior of finding indistinct objects in images, i.e., observing them from multiple angles, distances, and perspectives. Specifically, the key idea is to generate multiple ways of observation (multi-view) by data augmentation and apply them as inputs. MFFN captures critical boundary and semantic information by comparing and fusing the extracted multi-view features. In addition, MFFN exploits the dependence and interaction between views and channels: it leverages the complementary information between different views through a two-stage attention module called Co-attention of Multi-view (CAMV), and a local-overall module called the Channel Fusion Unit (CFU) explores the channel-wise contextual clues of diverse feature maps in an iterative manner. Experimental results show that our method performs favorably against existing state-of-the-art methods when trained on the same data. The code will be available at https://github.com/dwardzheng/MFFN_COD.
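The multi-view idea above, generating several "ways of observation" of the same image via data augmentation before feature extraction, can be sketched as follows. This is a minimal illustration under assumed augmentations (flips for "angle", a center crop for "distance"); the paper's actual augmentation pipeline and view set may differ, and the function name `generate_views` is hypothetical.

```python
import numpy as np

def generate_views(img: np.ndarray) -> dict:
    """Produce multiple observation 'views' of one input image via simple
    data augmentation, in the spirit of MFFN's multi-view inputs.
    The specific transforms here are illustrative assumptions only."""
    h, w = img.shape[:2]
    return {
        "original": img,
        # "Angle" views: horizontal and vertical flips of the scene.
        "flip_h": img[:, ::-1],
        "flip_v": img[::-1, :],
        # "Distance" view: a centered crop simulates observing from closer.
        "close_up": img[h // 4: 3 * h // 4, w // 4: 3 * w // 4],
    }

# Example: an 8x8 single-channel "image".
img = np.arange(64, dtype=np.float32).reshape(8, 8)
views = generate_views(img)
```

Each view would then be passed through a shared feature extractor, after which modules such as CAMV and CFU compare and fuse the per-view feature maps.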
