Paper Title

LegoNet: A Fast and Exact Unlearning Architecture

Paper Authors

Sihao Yu, Fei Sun, Jiafeng Guo, Ruqing Zhang, Xueqi Cheng

Paper Abstract

Machine unlearning aims to erase the impact of specific training samples from a trained model upon deletion requests. Re-training the model on the retained data after deletion is an effective but inefficient approach, due to the huge number of model parameters and re-training samples. To speed up, a natural way is to reduce such parameters and samples. However, such a strategy typically leads to a loss in model performance, which poses the challenge of increasing the unlearning efficiency while maintaining acceptable performance. In this paper, we present a novel network, namely \textit{LegoNet}, which adopts the framework of ``fixed encoder + multiple adapters''. We fix the encoder~(\ie the backbone for representation learning) of LegoNet to reduce the parameters that need to be re-trained during unlearning. Since the encoder occupies a major part of the model parameters, the unlearning efficiency is significantly improved. However, fixing the encoder empirically leads to a significant performance drop. To compensate for the performance loss, we adopt an ensemble of multiple adapters, which are independent sub-models that infer the prediction from the encoding~(\ie the output of the encoder). Furthermore, we design an activation mechanism for the adapters to further trade off unlearning efficiency against model performance. This mechanism guarantees that each sample can only impact very few adapters, so during unlearning, both the parameters and the samples that need to be re-trained are reduced. Empirical experiments verify that LegoNet accomplishes fast and exact unlearning while maintaining acceptable performance, comprehensively outperforming unlearning baselines.
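To make the "fixed encoder + multiple adapters" idea concrete, here is a minimal sketch, not the paper's actual implementation: the encoder is frozen, a deterministic activation mechanism maps each sample to only a few adapters, and exact unlearning re-trains just those adapters on the retained data. All names (`K_ACTIVE`, `assign_adapters`, the toy averaging "adapters") are illustrative assumptions.

```python
import random

NUM_ADAPTERS = 8
K_ACTIVE = 2  # each sample activates only a few adapters

def encode(x):
    # Stand-in for the fixed (frozen) encoder: never re-trained.
    return [x, x * x]

def assign_adapters(sample_id):
    # Activation mechanism (illustrative): deterministically map a sample
    # to K_ACTIVE adapters, so deleting it affects only those adapters.
    rng = random.Random(sample_id)
    return rng.sample(range(NUM_ADAPTERS), K_ACTIVE)

def train_adapter(adapter_id, samples):
    # Toy "adapter": averages the encodings of its assigned samples.
    assigned = [s for s in samples if adapter_id in assign_adapters(s["id"])]
    if not assigned:
        return [0.0, 0.0]
    encs = [encode(s["x"]) for s in assigned]
    return [sum(e[d] for e in encs) / len(encs) for d in range(2)]

def unlearn(sample_id, samples, adapters):
    # Exact unlearning: drop the sample from the data, then re-train
    # only the few adapters it activated; the encoder stays untouched.
    retained = [s for s in samples if s["id"] != sample_id]
    for a in assign_adapters(sample_id):
        adapters[a] = train_adapter(a, retained)
    return retained

samples = [{"id": i, "x": float(i)} for i in range(20)]
adapters = [train_adapter(a, samples) for a in range(NUM_ADAPTERS)]
samples = unlearn(sample_id=3, samples=samples, adapters=adapters)
```

Because only `K_ACTIVE` of the `NUM_ADAPTERS` adapters (and none of the encoder's parameters) are re-trained per deletion, both the parameters and the samples involved in unlearning shrink, which is the efficiency argument the abstract makes.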
