WWW 2021 Recommender Systems Papers Roundup (Downloads Included)

This post rounds up recommender-system papers released ahead of WWW 2021, covering cold-start recommendation, GNN-based recommendation, social recommendation, explainable recommendation, location-based recommendation, review-based recommendation, sequential recommendation, and knowledge-graph-based recommendation. Models such as TaNP and IMP-GCN tackle challenges like user cold start and over-smoothing, while other works exploit user feedback to improve recommendation models, learn fair representations, or apply self-supervised multi-channel hypergraph convolutional networks to raise recommendation diversity and quality.

Hey, remember to star "机器学习与推荐算法" (Machine Learning & Recommendation Algorithms).


The top-tier international academic conference WWW 2021 is scheduled for April 12-23, 2021. Because of the COVID-19 pandemic, the conference will be held online.

Today we have gathered some recommender-system papers that their authors released early on arXiv, to get an advance look at the latest ideas in the field. They mainly cover cold-start recommendation, GNN-based recommendation, social recommendation, explainable recommendation, location-based recommendation, review-based recommendation, sequential recommendation, and recommendation with knowledge graphs.

Limited by our search coverage, we collected 12 recommender-system papers in total; their titles and abstracts are listed below for anyone who needs them. A packaged download of the full collection is available at the end of the post.

Task-adaptive Neural Process for User Cold-Start Recommendation

User cold-start recommendation is a long-standing challenge for recommender systems due to the fact that only a few interactions of cold-start users can be exploited. Recent studies seek to address this challenge from the perspective of meta learning, and most of them follow a manner of parameter initialization, where the model parameters can be learned by a few steps of gradient updates. While these gradient-based meta-learning models achieve promising performances to some extent, a fundamental problem of them is how to adapt the global knowledge learned from previous tasks for the recommendations of cold-start users more effectively. In this paper, we develop a novel meta-learning recommender called task-adaptive neural process (TaNP). TaNP is a new member of the neural process family, where making recommendations for each user is associated with a corresponding stochastic process. TaNP directly maps the observed interactions of each user to a predictive distribution, sidestepping some training issues in gradient-based meta-learning models. More importantly, to balance the trade-off between model capacity and adaptation reliability, we introduce a novel task-adaptive mechanism. It enables our model to learn the relevance of different tasks and customize the global knowledge to the task-related decoder parameters for estimating user preferences. We validate TaNP on multiple benchmark datasets in different experimental settings. Empirical results demonstrate that TaNP yields consistent improvements over several state-of-the-art meta-learning recommenders.
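
To make the neural-process idea above more concrete, here is a minimal PyTorch sketch of a TaNP-style forward pass: a handful of observed interactions of a cold-start user are encoded into a task-level latent variable, a heavily simplified task-adaptive gate modulates that latent code, and a decoder scores candidate items. All layer sizes, the `task_gate` modulation, and the toy tensors are hypothetical illustrations rather than the authors' actual architecture.

```python
# Minimal sketch, assuming hypothetical layer sizes and a simplified
# task-adaptive gate; illustrative only, not the authors' implementation.
import torch
import torch.nn as nn

class NeuralProcessRecommender(nn.Module):
    def __init__(self, user_dim=32, item_dim=32, latent_dim=64):
        super().__init__()
        # Encoder: maps each observed (user, item, rating) interaction to a representation.
        self.encoder = nn.Sequential(
            nn.Linear(user_dim + item_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Heads producing mean / log-variance of the task-level latent variable.
        self.mu_head = nn.Linear(latent_dim, latent_dim)
        self.logvar_head = nn.Linear(latent_dim, latent_dim)
        # Simplified stand-in for TaNP's task-adaptive customization of the decoder.
        self.task_gate = nn.Linear(latent_dim, latent_dim)
        # Decoder: scores a target item given user/item features and the task latent.
        self.decoder = nn.Sequential(
            nn.Linear(user_dim + item_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, support_x, support_y, query_x):
        # support_x: [n_support, user_dim + item_dim], support_y: [n_support, 1]
        # query_x:   [n_query,   user_dim + item_dim]
        r = self.encoder(torch.cat([support_x, support_y], dim=-1)).mean(dim=0)
        mu, logvar = self.mu_head(r), self.logvar_head(r)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample the task latent
        z = torch.sigmoid(self.task_gate(z)) * z                 # task-adaptive modulation
        z = z.expand(query_x.size(0), -1)
        return self.decoder(torch.cat([query_x, z], dim=-1))     # predicted preferences

# Toy usage: one cold-start user (task) with 5 observed interactions and 20 candidates.
model = NeuralProcessRecommender()
support_x, support_y = torch.randn(5, 64), torch.randn(5, 1)
query_x = torch.randn(20, 64)
scores = model(support_x, support_y, query_x)                    # shape [20, 1]
```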

Interest-aware Message-Passing GCN for Recommendation

Graph Convolution Networks (GCNs) manifest great potential in recommendation. This is attributed to their capability on learning good user and item embeddings by exploiting the collaborative signals from the high-order neighbors. Like other GCN models, the GCN based recommendation models also suffer from the notorious over-smoothing problem - when stacking more layers, node embeddings become more similar and eventually indistinguishable, resulted in performance degradation. The recently proposed LightGCN and LR-GCN alleviate this problem to some extent, however, we argue that they overlook an important factor for the over-smoothing problem in recommendation, that is, high-order neighboring users with no common interests of a user can be also involved in the user's embedding learning in the graph convolution operation. As a result, the multi-layer graph convolution will make users with dissimilar interests have similar embeddings. In this paper, we propose a novel Interest-aware Message-Passing GCN (IMP-GCN) recommendation model, which performs high-order graph convolution inside subgraphs. The subgraph consists of users with similar interests and their interacted items. To form the subgraphs, we design an unsupervised subgraph generation module, which can effectively identify users with common interests by exploiting both user feature and graph structure. To this end, our model can avoid propagating negative information from high-order neighbors into embedding learning. Experimental results on three large-scale benchmark datasets show that our model can gain performance improvement by stacking more layers and outperform the state-of-the-art GCN-based recommendation models significantly.
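
The core mechanism of IMP-GCN, restricting high-order propagation to subgraphs formed by users with shared interests, can be sketched roughly as follows. Here the interest groups are simply passed in as precomputed ids (the paper learns them with an unsupervised subgraph-generation module), the propagation is a LightGCN-style normalized neighborhood average, and names such as `imp_gcn_layer` and all toy sizes are hypothetical.

```python
# Minimal sketch, assuming precomputed interest groups and a LightGCN-style
# propagation rule; illustrative only, not the paper's implementation.
import torch

def normalize_adj(adj):
    # Symmetric normalization D^(-1/2) A D^(-1/2), as used in LightGCN-style models.
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

def imp_gcn_layer(interactions, emb, user_groups, n_users):
    """One layer of subgraph-restricted propagation.

    interactions: [n_users, n_items] binary user-item matrix
    emb:          [n_users + n_items, d] node embeddings (users first, then items)
    user_groups:  [n_users] integer interest-group id per user (hypothetical grouping)
    """
    n_items = interactions.size(1)
    out = torch.zeros_like(emb)
    for g in user_groups.unique():
        mask = (user_groups == g).float().unsqueeze(1)  # [n_users, 1]
        sub_r = interactions * mask                      # keep only this group's edges
        adj = torch.zeros(n_users + n_items, n_users + n_items)
        adj[:n_users, n_users:] = sub_r                  # user -> item edges
        adj[n_users:, :n_users] = sub_r.t()              # item -> user edges
        out = out + normalize_adj(adj) @ emb             # propagate inside the subgraph
    return out

# Toy usage: 6 users, 8 items, 2 interest groups, 16-dimensional embeddings.
torch.manual_seed(0)
interactions = (torch.rand(6, 8) > 0.6).float()
emb = torch.randn(6 + 8, 16)
user_groups = torch.tensor([0, 0, 0, 1, 1, 1])
emb_next = imp_gcn_layer(interactions, emb, user_groups, n_users=6)
```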

Random Walks with Erasure: Diversifying Personalized Recommendations on Social and Information Networks

Most existing personalization systems promote items that match a user's previous choices or those that are popular among similar users. This results in recommendations that are highly similar to the ones users are already exposed to, resulting in their isolation inside fam…
