Paper Title


Adaptive Network Combination for Single-Image Reflection Removal: A Domain Generalization Perspective

Authors

Ming Liu, Jianan Pan, Zifei Yan, Wangmeng Zuo, Lei Zhang

Abstract


Recently, multiple synthetic and real-world datasets have been built to facilitate the training of deep single image reflection removal (SIRR) models. Meanwhile, diverse testing sets are also provided with different types of reflection and scenes. However, the non-negligible domain gaps between training and testing sets make it difficult to learn deep models that generalize well to testing images. The diversity of reflections and scenes further makes it a mission impossible to learn a single model that is effective for all testing sets and real-world reflections. In this paper, we tackle these issues by learning SIRR models from a domain generalization perspective. In particular, for each source set, a specific SIRR model is trained to serve as a domain expert of relevant reflection types. For a given reflection-contaminated image, we present a reflection type-aware weighting (RTAW) module to predict expert-wise weights. RTAW can then be incorporated with adaptive network combination (AdaNEC) to handle different reflection types and scenes, i.e., to generalize to unknown domains. Two representative AdaNEC methods, i.e., output fusion (OF) and network interpolation (NI), are provided by considering both adaptation levels and efficiency. For images from one source set, we train RTAW to predict expert-wise weights of only the other domain experts to improve generalization ability, while the weights of all experts are predicted and employed during testing. An in-domain expert (IDE) loss is presented for training RTAW. Extensive experiments show the appealing performance gain of our AdaNEC on different state-of-the-art SIRR networks. Source code and pre-trained models will be available at https://github.com/csmliu/AdaNEC.
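The two AdaNEC schemes named above can be sketched with toy linear "experts" standing in for real SIRR networks. This is a minimal illustration only: the expert-wise weights, which the RTAW module would predict from the input image, are supplied by hand here, and the function names are not the authors' API.

```python
import numpy as np

def output_fusion(experts, weights, x):
    """OF: run every expert on the input, then blend their outputs
    with the expert-wise weights (higher adaptation, higher cost)."""
    outputs = [W @ x for W in experts]
    return sum(w * out for w, out in zip(weights, outputs))

def network_interpolation(experts, weights, x):
    """NI: blend the experts' parameters first, then run a single
    forward pass with the interpolated network (more efficient)."""
    W_mix = sum(w * W for w, W in zip(weights, experts))
    return W_mix @ x

# Two toy linear experts and hand-picked weights (in the paper these
# weights would come from RTAW for the given input image).
experts = [np.eye(2), 2.0 * np.eye(2)]
weights = [0.25, 0.75]
x = np.array([1.0, 2.0])
print(output_fusion(experts, weights, x))         # [1.75 3.5]
print(network_interpolation(experts, weights, x)) # [1.75 3.5]
```

For purely linear experts the two schemes coincide; for real nonlinear SIRR networks they differ, which is why the paper treats OF and NI as distinct trade-offs between adaptation level (OF evaluates every expert) and efficiency (NI evaluates one interpolated network).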
