Paper Title

Deep Active Learning with Noise Stability

Paper Authors

Xingjian Li, Pengkun Yang, Yangcheng Gu, Xueying Zhan, Tianyang Wang, Min Xu, Chengzhong Xu

Paper Abstract

Uncertainty estimation for unlabeled data is crucial to active learning. With a deep neural network employed as the backbone model, the data selection process is highly challenging due to the potential over-confidence of the model inference. Existing methods resort to special learning fashions (e.g. adversarial) or auxiliary models to address this challenge. This tends to result in complex and inefficient pipelines, which would render the methods impractical. In this work, we propose a novel algorithm that leverages noise stability to estimate data uncertainty. The key idea is to measure the output derivation from the original observation when the model parameters are randomly perturbed by noise. We provide theoretical analyses by leveraging the small Gaussian noise theory and demonstrate that our method favors a subset with large and diverse gradients. Our method is generally applicable in various tasks, including computer vision, natural language processing, and structural data analysis. It achieves competitive performance compared against state-of-the-art active learning baselines.
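To make the key idea concrete, below is a minimal sketch of a noise-stability acquisition score, not the authors' reference implementation. It assumes a PyTorch classifier and an unlabeled data loader yielding `(input, ...)` batches; the function name `noise_stability_scores`, the per-parameter noise scaling, and the use of an L2 output deviation are illustrative assumptions.

```python
import copy
import torch


@torch.no_grad()
def noise_stability_scores(model, unlabeled_loader, noise_scale=1e-3,
                           n_trials=5, device="cpu"):
    """Score unlabeled samples by how much the model's output shifts
    when its parameters are perturbed with small Gaussian noise.
    (Hypothetical helper; scaling choices are assumptions.)"""
    model = model.eval().to(device)

    # Clean (unperturbed) outputs for every unlabeled sample.
    clean_outputs = torch.cat([model(x.to(device)) for x, *_ in unlabeled_loader])

    scores = torch.zeros(len(clean_outputs), device=device)
    for _ in range(n_trials):
        perturbed = copy.deepcopy(model)
        for p in perturbed.parameters():
            # Add small Gaussian noise, scaled relative to the parameter magnitude
            # so the perturbation stays "small" for every layer (an assumption).
            p.add_(noise_scale * torch.randn_like(p) * p.norm() / (p.numel() ** 0.5 + 1e-12))

        noisy_outputs = torch.cat([perturbed(x.to(device)) for x, *_ in unlabeled_loader])
        # Accumulate the output deviation as the uncertainty score.
        scores += (noisy_outputs - clean_outputs).norm(dim=1)

    return scores / n_trials  # higher score -> less noise-stable -> more uncertain
```

In an active-learning round, the samples with the largest scores would then be selected for annotation, matching the abstract's intuition that unstable outputs under small parameter noise indicate large and diverse gradients.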
