Paper Title
Context-aware Feature Generation for Zero-shot Semantic Segmentation
Paper Authors
Paper Abstract
Existing semantic segmentation models rely heavily on dense pixel-wise annotations. To reduce the annotation burden, we focus on a challenging task named zero-shot semantic segmentation, which aims to segment unseen objects with zero annotations. This task can be accomplished by transferring knowledge across categories via semantic word embeddings. In this paper, we propose a novel context-aware feature generation method for zero-shot segmentation named CaGNet. In particular, based on the observation that a pixel-wise feature highly depends on its contextual information, we insert a contextual module into the segmentation network to capture pixel-wise contextual information, which guides the process of generating more diverse and context-aware features from semantic word embeddings. Our method achieves state-of-the-art results on three benchmark datasets for zero-shot segmentation. Code is available at: https://github.com/bcmi/CaGNet-Zero-Shot-Semantic-Segmentation.
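To make the abstract's idea concrete, below is a minimal, hypothetical PyTorch sketch of what "context-aware feature generation" can look like: a contextual module predicts a pixel-wise latent code from backbone feature maps, and a generator combines that code with a semantic word embedding to synthesize pixel-wise features. All module and variable names (ContextualModule, FeatureGenerator, latent_dim, etc.) are illustrative assumptions based only on the abstract, not the authors' actual CaGNet implementation; consult the linked repository for the real architecture.

```python
# Hypothetical sketch of context-aware feature generation (not the official CaGNet code).
import torch
import torch.nn as nn

class ContextualModule(nn.Module):
    """Predicts a pixel-wise contextual latent code from backbone feature maps."""
    def __init__(self, in_channels: int, latent_dim: int):
        super().__init__()
        # Dilated convolutions enlarge the receptive field so each pixel's
        # latent code can summarize its surrounding context.
        self.context = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 2 * latent_dim, 1),  # per-pixel mean and log-variance
        )

    def forward(self, feats: torch.Tensor):
        mu, logvar = self.context(feats).chunk(2, dim=1)
        # Reparameterization: sample a latent code per pixel so the generator
        # can produce diverse features for the same semantic embedding.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class FeatureGenerator(nn.Module):
    """Generates pixel-wise features from a word embedding plus a contextual latent code."""
    def __init__(self, embed_dim: int, latent_dim: int, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(embed_dim + latent_dim, feat_dim, 1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 1),
        )

    def forward(self, word_embed: torch.Tensor, z: torch.Tensor):
        # word_embed: (B, embed_dim, H, W) -- class embedding broadcast to every pixel
        # z:          (B, latent_dim, H, W) -- pixel-wise contextual latent code
        return self.net(torch.cat([word_embed, z], dim=1))

# Toy usage with made-up tensor shapes.
backbone_feats = torch.randn(2, 256, 16, 16)
word_embed = torch.randn(2, 300, 16, 16)   # e.g. a 300-d word embedding per pixel
ctx = ContextualModule(256, latent_dim=32)
gen = FeatureGenerator(300, 32, feat_dim=256)
z, mu, logvar = ctx(backbone_feats)
fake_feats = gen(word_embed, z)            # (2, 256, 16, 16), usable to train a classifier
```

In this sketch the generated features for unseen classes would be fed to the segmentation classifier in place of real features, which is the general strategy implied by "transferring knowledge across categories via semantic word embeddings"; the specific losses and training schedule are left to the original paper.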