Paper Title
FedGAN: Federated Generative Adversarial Networks for Distributed Data
Paper Authors
Paper Abstract
We propose Federated Generative Adversarial Network (FedGAN) for training a GAN across distributed sources of non-independent-and-identically-distributed data, subject to communication and privacy constraints. Our algorithm uses local generators and discriminators that are periodically synced via an intermediary which averages and broadcasts the generator and discriminator parameters. We theoretically prove the convergence of FedGAN, with both equal and two-time-scale updates of the generator and discriminator, under standard assumptions, using stochastic approximation and communication-efficient stochastic gradient descent. We evaluate FedGAN on toy examples (a 2D system, mixed Gaussians, and the Swiss roll), image datasets (MNIST, CIFAR-10, and CelebA), and time-series datasets (household electricity consumption and electric vehicle charging sessions). We show that FedGAN converges and has performance similar to a general distributed GAN, while reducing communication complexity. We also show its robustness to reduced communication.
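
The abstract describes the core FedGAN update: agents train local generators and discriminators and periodically send their parameters to an intermediary, which averages them and broadcasts the result back. The following is a minimal sketch of that synchronization step only, assuming PyTorch modules and a uniform (unweighted) average; the function names (average_state_dicts, sync_parameters) are illustrative and not from the paper, and the paper's actual averaging weights each agent by its data share.

# Minimal sketch of the FedGAN synchronization step (not the authors' code).
# Assumes all agents hold structurally identical generator/discriminator
# networks with floating-point parameters; uses a uniform average for brevity.
import copy
from typing import List

import torch
import torch.nn as nn


def average_state_dicts(models: List[nn.Module]) -> dict:
    # Element-wise average of the parameters of structurally identical models.
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key] for m in models], dim=0).mean(dim=0)
    return avg


def sync_parameters(generators: List[nn.Module], discriminators: List[nn.Module]) -> None:
    # Intermediary step: average the local parameters, then broadcast the
    # averaged generator and discriminator back to every agent.
    g_avg = average_state_dicts(generators)
    d_avg = average_state_dicts(discriminators)
    for g, d in zip(generators, discriminators):
        g.load_state_dict(g_avg)
        d.load_state_dict(d_avg)

In a full training loop, each agent would run several local generator/discriminator gradient steps on its own data between calls to sync_parameters; lengthening that local interval is what reduces communication relative to a fully synchronized distributed GAN.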