Contents
1. FPMC
Factorizing Personalized Markov Chains for Next-Basket Recommendation: WWW 2010
Model structure:
Core code:
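The core-code slot is empty; as a stand-in, here is a minimal PyTorch sketch of the FPMC scoring rule (class and embedding names are illustrative, not from a specific codebase): the score of candidate item i for user u whose last item is l is an MF term <V_u^UI, V_i^IU> plus a factorized first-order Markov term <V_i^IL, V_l^LI>.
import torch.nn as nn

class FPMCScore(nn.Module):
    """Illustrative FPMC scorer: matrix-factorization term plus a
    factorized first-order Markov (last item -> next item) term."""
    def __init__(self, n_users, n_items, dim):
        super().__init__()
        self.UI = nn.Embedding(n_users, dim)   # user factors (MF term)
        self.IU = nn.Embedding(n_items, dim)   # item factors (MF term)
        self.IL = nn.Embedding(n_items, dim)   # next-item factors (Markov term)
        self.LI = nn.Embedding(n_items, dim)   # last-item factors (Markov term)

    def forward(self, user, last_item, next_item):
        mf = (self.UI(user) * self.IU(next_item)).sum(-1)
        fmc = (self.IL(next_item) * self.LI(last_item)).sum(-1)
        return mf + fmc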
Strengths:
Issues:
2. HRM
Learning Hierarchical Representation Model for Next Basket Recommendation: SIGIR 2015
Model structure:
Core code:
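No core code is given; below is a minimal sketch of the average-pooling variant, assuming PyTorch (names are mine): item vectors of the last basket are pooled first, the result is pooled again with the user vector, and the hybrid representation scores the next item.
import torch.nn as nn

class HRMScore(nn.Module):
    """Two-level aggregation (avg-avg variant): pool the last basket's
    item vectors, then pool the result with the user vector."""
    def __init__(self, n_users, n_items, dim):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, user, last_basket, next_item):
        basket = self.item_emb(last_basket).mean(dim=1)      # level 1: items -> basket
        hybrid = (self.user_emb(user) + basket) / 2          # level 2: user + basket
        return (hybrid * self.item_emb(next_item)).sum(-1)   # score for next item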
Strengths:
Issues:
3. GRU4Rec
Improved Recurrent Neural Networks for Session-based Recommendations: DLRS 2016
Model structure:
Core code:
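A minimal PyTorch sketch of the GRU4Rec forward pass (hyperparameter and class names are illustrative): item embeddings go through a GRU and the final hidden state scores all items.
import torch.nn as nn

class GRU4Rec(nn.Module):
    """Session items -> embeddings -> GRU -> scores over all items."""
    def __init__(self, n_items, dim, hidden, n_layers=1):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
        self.gru = nn.GRU(dim, hidden, n_layers, batch_first=True)
        self.out = nn.Linear(hidden, n_items)

    def forward(self, item_seq):
        h, _ = self.gru(self.item_emb(item_seq))   # h: [B, L, hidden]
        return self.out(h[:, -1, :])               # next-item scores from last state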
Strengths:
Issues:
4. GRU4RecF
Parallel Recurrent Neural Network Architectures for Feature-rich Session-based Recommendations: RecSys 2016
Model structure:
Core code:
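A sketch of the parallel structure only (the paper also proposes alternating training of the subnets, which is omitted here; names are illustrative): one GRU runs over item IDs, another over item features, and their final states are fused for scoring.
import torch
import torch.nn as nn

class ParallelGRU(nn.Module):
    """Two parallel GRUs -- one over item IDs, one over item features --
    whose final states are concatenated to score the next item."""
    def __init__(self, n_items, id_dim, feat_dim, hidden):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, id_dim, padding_idx=0)
        self.id_gru = nn.GRU(id_dim, hidden, batch_first=True)
        self.feat_gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_items)

    def forward(self, item_seq, feat_seq):           # feat_seq: [B, L, feat_dim]
        h_id, _ = self.id_gru(self.item_emb(item_seq))
        h_ft, _ = self.feat_gru(feat_seq)
        fused = torch.cat([h_id[:, -1], h_ft[:, -1]], dim=-1)
        return self.out(fused)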
Strengths:
Issues:
5. GRU4RecKG
An extension of GRU4Rec that concatenates each item embedding with its pre-trained knowledge graph embedding as the input.
Model structure:
Core code:
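Following the description above, a minimal sketch (illustrative names): the GRU input is the trainable item embedding concatenated with a frozen, pre-trained KG embedding.
import torch
import torch.nn as nn

class GRU4RecKG(nn.Module):
    """GRU4Rec whose input is [item embedding ; frozen KG embedding]."""
    def __init__(self, n_items, dim, kg_emb, hidden):   # kg_emb: [n_items, kg_dim]
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
        self.kg_emb = nn.Embedding.from_pretrained(kg_emb, freeze=True)
        self.gru = nn.GRU(dim + kg_emb.size(1), hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_items)

    def forward(self, item_seq):
        x = torch.cat([self.item_emb(item_seq), self.kg_emb(item_seq)], dim=-1)
        h, _ = self.gru(x)
        return self.out(h[:, -1])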
Strengths:
Issues:
6. TransRec
Translation-based Recommendation: RecSys 2017
Model structure:
Core code:
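A minimal sketch of the translation score, simplified (the paper uses a shared global translation vector plus per-user offsets; here each user gets one vector, and squared L2 distance is used): the next item j given user u and previous item i is scored as beta_j - d(gamma_i + t_u, gamma_j).
import torch.nn as nn

class TransRecScore(nn.Module):
    """The user acts as a translation vector in the item space."""
    def __init__(self, n_users, n_items, dim):
        super().__init__()
        self.t = nn.Embedding(n_users, dim)        # per-user translation
        self.gamma = nn.Embedding(n_items, dim)    # item points
        self.beta = nn.Embedding(n_items, 1)       # item bias

    def forward(self, user, prev_item, next_item):
        translated = self.gamma(prev_item) + self.t(user)
        dist = (translated - self.gamma(next_item)).pow(2).sum(-1)
        return self.beta(next_item).squeeze(-1) - dist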
Strengths:
Issues:
7. NARM
Neural Attentive Session-based Recommendation: CIKM 2017
Model structure:
Core code:
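A minimal sketch (padding mask omitted; names illustrative): a GRU encodes the session; the last hidden state is the global representation, an attention-weighted sum of all hidden states is the local representation, and their concatenation is decoded bilinearly against the item embeddings.
import torch
import torch.nn as nn

class NARM(nn.Module):
    """Global encoder (last hidden state) + local encoder (attention
    over hidden states), decoded against the item embedding table."""
    def __init__(self, n_items, dim, hidden):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
        self.gru = nn.GRU(dim, hidden, batch_first=True)
        self.A1 = nn.Linear(hidden, hidden, bias=False)
        self.A2 = nn.Linear(hidden, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)
        self.B = nn.Linear(2 * hidden, dim, bias=False)  # map into item space

    def forward(self, item_seq):
        h, _ = self.gru(self.item_emb(item_seq))             # [B, L, H]
        h_t = h[:, -1]                                       # global representation
        alpha = self.v(torch.sigmoid(self.A1(h_t).unsqueeze(1) + self.A2(h)))
        local = (alpha * h).sum(dim=1)                       # local representation
        c = self.B(torch.cat([h_t, local], dim=-1))          # session vector
        return c @ self.item_emb.weight.t()                  # scores over all items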
Strengths:
Issues:
8. SASRec
Self-Attentive Sequential Recommendation: ICDM 2018
Model structure: based on the Transformer, with an embedding layer, self-attention blocks (multi-head self-attention + residual connections, layer normalization, and dropout, followed by a feed-forward network), and a prediction layer. The FFN uses ReLU to add non-linear capacity.
Multiple self-attention blocks are stacked to learn more complex feature transformations; a minimal sketch of the FFN sub-block follows.
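For concreteness, a sketch of the position-wise FFN sub-block described above (post-LayerNorm residual form; class and parameter names are illustrative):
import torch.nn as nn

class PointWiseFFN(nn.Module):
    """Feed-forward sub-block after self-attention: two linear layers
    with ReLU, wrapped in dropout + residual + LayerNorm."""
    def __init__(self, hidden, inner, dropout=0.2):
        super().__init__()
        self.w1 = nn.Linear(hidden, inner)
        self.w2 = nn.Linear(inner, hidden)
        self.act = nn.ReLU()
        self.dropout = nn.Dropout(dropout)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, x):
        return self.norm(x + self.dropout(self.w2(self.act(self.w1(x)))))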
Core code:
def forward(self, item_seq, item_seq_len):
    # learnable position embeddings, one per position in the sequence
    position_ids = torch.arange(item_seq.size(1), dtype=torch.long, device=item_seq.device)
    position_ids = position_ids.unsqueeze(0).expand_as(item_seq)
    position_embedding = self.position_embedding(position_ids)
    # input representation = item embedding + position embedding
    item_emb = self.item_embedding(item_seq)
    input_emb = item_emb + position_embedding
    input_emb = self.LayerNorm(input_emb)
    input_emb = self.dropout(input_emb)
    # causal (left-to-right) attention mask, then the stacked Transformer blocks
    extended_attention_mask = self.get_attention_mask(item_seq)
    trm_output = self.trm_encoder(input_emb, extended_attention_mask, output_all_encoded_layers=True)
    output = trm_output[-1]
    # take the hidden state at each sequence's last valid position
    output = self.gather_indexes(output, item_seq_len - 1)
    return output  # [B H]
Strengths: the first sequential recommendation model built on the self-attention mechanism; it models the user's historical behaviors with multi-head self-attention.
Issues: the deeper the network, the more easily the model overfits and the less stable training becomes, so residual connections, layer normalization, and dropout are added to curb overfitting.
9. SASRecF
An extension of SASRec that concatenates item representations with item attribute representations as the model input.
Model structure:
Core code:
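A minimal sketch of the input layer implied by the description above (names illustrative): concatenate the item embedding with an attribute embedding and project to the hidden size before the Transformer encoder.
import torch
import torch.nn as nn

class SASRecFInput(nn.Module):
    """Fuse item-ID and item-attribute embeddings into the encoder input."""
    def __init__(self, n_items, n_attrs, dim, attr_dim, hidden):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
        self.attr_emb = nn.Embedding(n_attrs, attr_dim, padding_idx=0)
        self.proj = nn.Linear(dim + attr_dim, hidden)

    def forward(self, item_seq, attr_seq):
        x = torch.cat([self.item_emb(item_seq), self.attr_emb(attr_seq)], dim=-1)
        return self.proj(x)   # then add position embeddings and encode as in SASRec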
Strengths:
Issues:
10. Caser
Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding: WSDM 2018
Model structure:
Core code:
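A minimal sketch of Caser's convolutional encoder (the user embedding and final prediction layer are omitted; names illustrative): the last L item embeddings form an L x d "image"; horizontal filters of varying heights capture union-level sequential patterns, and a vertical filter captures point-level patterns.
import torch
import torch.nn as nn

class CaserEncoder(nn.Module):
    """Horizontal + vertical convolutions over the L x d embedding 'image'."""
    def __init__(self, n_items, dim, L=5, n_h=4, n_v=2):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
        # one horizontal filter bank per window height h = 1..L
        self.h_convs = nn.ModuleList(nn.Conv2d(1, n_h, (h, dim)) for h in range(1, L + 1))
        self.v_conv = nn.Conv2d(1, n_v, (L, 1))
        self.fc = nn.Linear(n_h * L + n_v * dim, dim)

    def forward(self, item_seq):                     # item_seq: [B, L]
        x = self.item_emb(item_seq).unsqueeze(1)     # [B, 1, L, d]
        h_out = [conv(x).squeeze(3).max(dim=2).values for conv in self.h_convs]
        v_out = self.v_conv(x).flatten(1)            # [B, n_v * d]
        return self.fc(torch.cat(h_out + [v_out], dim=1))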
Strengths:
Issues:
11. DIN
Deep Interest Network for Click-Through Rate Prediction: SIGKDD 2018
Model structure: introduces a local activation unit built on an attention mechanism;
Core code:
def forward(self, user, item_seq, item_seq_len):
    # target_item_feat_emb / item_feat_list are built earlier in the full model;
    # the attention pools the behavior sequence w.r.t. the target item
    user_emb = self.attention(target_item_feat_emb, item_feat_list, item_seq_len)
    user_emb = user_emb.squeeze()
    # input the DNN to get the prediction
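Since the snippet above is cut off, here is a self-contained sketch of the local activation unit itself (MLP layout simplified; names illustrative): each behavior embedding is weighted by its relevance to the candidate item, and the weighted sum gives the user interest vector. Note that DIN deliberately skips softmax normalization of the weights.
import torch
import torch.nn as nn

class LocalActivationUnit(nn.Module):
    """Attention between the candidate item and each historical behavior."""
    def __init__(self, dim, hidden=36):
        super().__init__()
        # MLP over [behavior, candidate, behavior - candidate, behavior * candidate]
        self.mlp = nn.Sequential(nn.Linear(4 * dim, hidden), nn.Sigmoid(), nn.Linear(hidden, 1))

    def forward(self, target_emb, seq_emb, mask):
        # target_emb: [B, d]; seq_emb: [B, L, d]; mask: [B, L], True = valid step
        t = target_emb.unsqueeze(1).expand_as(seq_emb)
        a = self.mlp(torch.cat([seq_emb, t, seq_emb - t, seq_emb * t], dim=-1))
        a = a.squeeze(-1).masked_fill(~mask, 0.0)       # no softmax, as in the paper
        return (a.unsqueeze(-1) * seq_emb).sum(dim=1)   # [B, d] interest vector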