Paper Title
FedPC: Federated Learning for Language Generation with Personal and Context Preference Embeddings
Paper Authors
Paper Abstract
Federated learning is a training paradigm that learns from multiple distributed users without aggregating data on a centralized server. Such a paradigm promises the ability to deploy machine learning at scale to a diverse population of end users without first collecting a large, labeled dataset for all possible tasks. As federated learning typically averages learning updates across a decentralized population, there is a growing need for personalization of federated learning systems (i.e., conversational agents must be able to personalize to a specific user's preferences). In this work, we propose a new direction for personalization research within federated learning, leveraging both personal embeddings and shared context embeddings. We also present an approach to predict these "preference" embeddings, enabling personalization without backpropagation. Compared to state-of-the-art personalization baselines, our approach achieves a 50% improvement in test-time perplexity using 0.001% of the memory required by baseline approaches, while achieving greater sample- and compute-efficiency.
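The abstract does not describe the architecture in detail; the following is a minimal PyTorch sketch of the general idea it names: conditioning a language model on a personal embedding and a shared context embedding, and predicting a "preference" embedding without backpropagation. All class names, dimensions, and the pooling-based predictor are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the FedPC code) of preference-conditioned generation.
# Assumptions: toy vocab/dimensions, a GRU decoder, and a mean-pooling
# predictor that maps a user's history to a personal embedding.

import torch
import torch.nn as nn

class PreferenceConditionedLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, pref_dim=16):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Project the concatenated [personal; context] preference vector
        # into the model's hidden space.
        self.pref_proj = nn.Linear(2 * pref_dim, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, personal_emb, context_emb):
        # tokens: (batch, seq); personal_emb, context_emb: (batch, pref_dim)
        pref = self.pref_proj(torch.cat([personal_emb, context_emb], dim=-1))
        x = self.token_emb(tokens) + pref.unsqueeze(1)  # broadcast over time
        h, _ = self.rnn(x)
        return self.head(h)  # (batch, seq, vocab_size) next-token logits

class PreferencePredictor(nn.Module):
    """One plausible reading of 'personalization without backpropagation':
    a small encoder predicts a user's preference embedding directly from
    their recent utterances, so no gradient steps run at test time."""
    def __init__(self, vocab_size=1000, d_model=128, pref_dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        self.out = nn.Linear(d_model, pref_dim)

    @torch.no_grad()
    def forward(self, history_tokens):
        # Mean-pool the user's history and map it to a preference vector.
        return self.out(self.emb(history_tokens).mean(dim=1))

# Usage sketch: predict a personal embedding from history, then decode.
lm = PreferenceConditionedLM()
predictor = PreferencePredictor()
history = torch.randint(0, 1000, (1, 32))    # a user's recent tokens
personal = predictor(history)                # predicted, no backprop
context = torch.zeros(1, 16)                 # placeholder shared context embedding
logits = lm(torch.randint(0, 1000, (1, 10)), personal, context)
```

Because only the small preference vectors differ per user, this style of personalization stores a few floats per user rather than per-user model copies, which is consistent with the abstract's memory-savings claim.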