Paper Title
Teaching Where to Look: Attention Similarity Knowledge Distillation for Low Resolution Face Recognition
Paper Authors
Paper Abstract
Deep learning has achieved outstanding performance on face recognition benchmarks, but performance degrades significantly for low resolution (LR) images. We propose an attention similarity knowledge distillation approach, which transfers attention maps obtained from a high resolution (HR) network as a teacher into an LR network as a student to boost LR recognition performance. Inspired by humans being able to approximate an object's region in an LR image based on prior knowledge obtained from HR images, we designed a knowledge distillation loss using cosine similarity to make the student network's attention resemble the teacher network's attention. Experiments on various LR face-related benchmarks confirmed that the proposed method generally improves recognition performance in LR settings, outperforming state-of-the-art results by simply transferring well-constructed attention maps. The code and pretrained models are publicly available at https://github.com/gist-ailab/teaching-where-to-look.
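The core of the distillation loss described in the abstract is a cosine similarity between the student's and teacher's attention maps. Below is a minimal PyTorch sketch of such a loss, assuming both networks produce attention maps of matching spatial size with shape (B, 1, H, W); the function name `attention_similarity_loss` and the tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def attention_similarity_loss(student_attn: torch.Tensor,
                              teacher_attn: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity attention distillation loss (illustrative sketch).

    Flattens each attention map and pushes the student's map toward the
    teacher's by maximizing their cosine similarity, i.e. minimizing
    1 - cos(student, teacher), averaged over the batch.
    """
    b = student_attn.size(0)
    s = student_attn.reshape(b, -1)          # (B, H*W) flattened student maps
    t = teacher_attn.reshape(b, -1)          # (B, H*W) flattened teacher maps
    cos = F.cosine_similarity(s, t, dim=1)   # per-sample similarity in [-1, 1]
    return (1.0 - cos).mean()

# Example: a frozen HR teacher map and a trainable LR student map.
if __name__ == "__main__":
    teacher_attn = torch.rand(8, 1, 7, 7)    # detached HR teacher attention
    student_attn = torch.rand(8, 1, 7, 7, requires_grad=True)
    loss = attention_similarity_loss(student_attn, teacher_attn)
    loss.backward()  # gradients flow only into the student's attention
    print(loss.item())
```

In practice this term would be weighted and added to the recognition loss (e.g., a margin-based softmax) while the HR teacher stays frozen; the weighting scheme here is left unspecified, as the abstract does not state it.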