Paper Title


Federated Generative Adversarial Learning

Paper Authors

Chenyou Fan, Ping Liu

Paper Abstract


This work studies training generative adversarial networks (GANs) under the federated learning setting. GANs have achieved advances in various real-world applications, such as image editing, style transfer, scene generation, etc. However, like other deep learning models, GANs also suffer from data limitation problems in real cases. To boost the performance of GANs on target tasks, collecting as many images as possible from different sources becomes not only important but also essential. For example, to build a robust and accurate biometric verification system, huge amounts of images might be collected from surveillance cameras and/or uploaded from the cellphones of users who accept the agreements. In an ideal case, utilizing all the data uploaded from public and private devices for model training would be straightforward. Unfortunately, in real scenarios this is hard for a few reasons. First, some data face serious concerns of leakage, and it is therefore prohibitive to upload them to a third-party server for model training. Second, images collected by different kinds of devices probably have distinctive biases due to various factors, $\textit{e.g.}$, collector preferences and geo-location differences, which is also known as "domain shift". To handle these problems, we propose a novel generative learning scheme utilizing a federated learning framework. Following the configuration of federated learning, we conduct model training and aggregation on one center and a group of clients. Specifically, our method learns distributed generative models in the clients, while the models trained in each client are fused into one unified and versatile model in the center. We perform extensive experiments to compare different federation strategies, and empirically examine the effectiveness of federation under different levels of parallelism and data skewness.
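The federation scheme the abstract describes — each client trains its GAN locally, and the center fuses the client models into one unified model — can be sketched as a FedAvg-style weighted average of client parameters. This is only a minimal illustration, not the paper's exact strategy (the paper compares several federation strategies); all names (`fuse_models`, `federation_round`, `num_samples`) are hypothetical:

```python
def fuse_models(client_weights, client_sizes):
    """Fuse per-client model parameters into one center model by a
    sample-count-weighted average (the simplest federation strategy).
    client_weights: list of {param_name: value} dicts, one per client.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    fused = {}
    for name in client_weights[0]:
        fused[name] = sum(
            w[name] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
    return fused


def federation_round(clients, center_G, center_D, local_train):
    """One communication round: each client trains its local GAN
    (generator G, discriminator D) starting from the center model,
    then the center fuses the uploaded parameters of both networks."""
    g_weights, d_weights, sizes = [], [], []
    for client in clients:
        G, D = local_train(client, center_G, center_D)
        g_weights.append(G)
        d_weights.append(D)
        sizes.append(client["num_samples"])
    return fuse_models(g_weights, sizes), fuse_models(d_weights, sizes)
```

Weighting by local sample count matters under the data skewness the abstract mentions: a client holding most of the data then contributes proportionally more to the fused model.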
