Paper Title

Knowledge Distillation for Federated Learning: a Practical Guide

Paper Authors

Alessio Mora, Irene Tenison, Paolo Bellavista, Irina Rish

Paper Abstract

Federated Learning (FL) enables the training of Deep Learning models without centrally collecting possibly sensitive raw data. The most widely used algorithms for FL are parameter-averaging schemes (e.g., Federated Averaging) that, however, have well-known limitations: model homogeneity, high communication cost, and poor performance in the presence of heterogeneous data distributions. Federated adaptations of regular Knowledge Distillation (KD) can solve or mitigate the weaknesses of parameter-averaging FL algorithms while possibly introducing other trade-offs. In this article, we present a focused review of state-of-the-art KD-based algorithms specifically tailored for FL, providing both a novel classification of the existing approaches and a detailed technical description of their pros, cons, and trade-offs.
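As a point of reference for the two building blocks the abstract contrasts, the minimal NumPy sketch below illustrates (i) parameter averaging in the spirit of Federated Averaging and (ii) a standard distillation loss on temperature-softened logits. This is an illustrative sketch, not code from the paper; the function names (`fedavg_aggregate`, `kd_loss`) and all parameters are assumptions chosen for the example.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """FedAvg-style aggregation: average client parameters weighted by local dataset size."""
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * params[name]
                  for params, n in zip(client_params, client_sizes))
        for name in client_params[0]
    }

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student predictions (Hinton-style KD)."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return float(np.mean(kl) * temperature ** 2)

# Toy usage: two clients holding one weight matrix each, then a KD loss on random logits.
clients = [{"w": np.ones((2, 2))}, {"w": np.zeros((2, 2))}]
print(fedavg_aggregate(clients, [100, 300]))          # "w" -> 0.25 * ones
print(kd_loss(np.random.randn(4, 10), np.random.randn(4, 10)))
```

In parameter-averaging FL, clients exchange model weights (so all clients must share one architecture), whereas KD-based adaptations exchange model outputs such as logits, which is what motivates the trade-offs the review surveys.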
