Paper Title


Attention over Self-attention:Intention-aware Re-ranking with Dynamic Transformer Encoders for Recommendation

Paper Authors

Zhuoyi Lin, Sheng Zang, Rundong Wang, Zhu Sun, J. Senthilnath, Chi Xu, Chee-Keong Kwoh

Paper Abstract


Re-ranking models refine the item recommendation lists generated by a prior global ranking model and have demonstrated their effectiveness in improving recommendation quality. However, most existing re-ranking solutions learn only from implicit feedback with a shared prediction model, which regrettably ignores inter-item relationships under diverse user intentions. In this paper, we propose a novel Intention-aware Re-ranking Model with Dynamic Transformer Encoder (RAISE), aiming to perform user-specific prediction for each individual user based on her intentions. Specifically, we first propose to mine latent user intentions from text reviews with an intention discovering module (IDM). By differentiating the importance of review information with a co-attention network, the latent user intention can be explicitly modeled for each user-item pair. We then introduce a dynamic transformer encoder (DTE) to capture user-specific inter-item relationships among item candidates by seamlessly incorporating the latent user intentions learned via the IDM. As such, one can not only achieve more personalized recommendations but also obtain corresponding explanations by constructing RAISE upon existing recommendation engines. Empirical studies on four public datasets show the superiority of the proposed RAISE, with relative improvements of up to 13.95%, 9.60%, and 13.03% as evaluated by Precision@5, MAP@5, and NDCG@5, respectively.
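The two core ideas in the abstract can be illustrated with a minimal numpy sketch: a co-attention step that weights user-side and item-side review-word embeddings into a latent intention vector, and an intention-conditioned ("dynamic") self-attention step over the candidate items. All variable names, shapes, and the specific score-biasing scheme below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8                          # embedding size (illustrative)
U = rng.normal(size=(5, d))    # user review-word embeddings (5 words)
V = rng.normal(size=(6, d))    # item review-word embeddings (6 words)

# Co-attention (IDM sketch): affinity between user and item review words,
# used to score each side's words and pool them into an intention vector.
A = U @ V.T                                # (5, 6) word-word affinity
u_weights = softmax(A.max(axis=1))         # importance of each user word
v_weights = softmax(A.max(axis=0))         # importance of each item word
intention = u_weights @ U + v_weights @ V  # latent intention, one per user-item pair

# Dynamic self-attention (DTE sketch): bias the attention scores over the
# candidate list by each candidate's match with the learned intention.
C = rng.normal(size=(4, d))                # 4 candidate item embeddings
scores = (C @ C.T) / np.sqrt(d)            # vanilla self-attention scores
bias = C @ intention                       # per-candidate intention affinity
attn = softmax(scores + bias[None, :])     # intention-aware attention weights
refined = attn @ C                         # refined candidate representations
```

Re-ranking would then score each row of `refined` (e.g. with a learned linear head) and sort the candidate list accordingly; the intention weights `u_weights`/`v_weights` are what would support review-level explanations.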
