GMF Model
GMF stands for Generalized Matrix Factorization. Let's look at the general framework.
In the experiments, users and items are mapped from their one-hot encoded IDs to latent vectors of a fixed dimension. It is called "generalized" because, instead of scoring a (user, item) pair with a plain inner product as in classic matrix factorization, the element-wise product of the two latent vectors is fed through a learned linear layer. Let's look at the code:
import torch
import torch.nn as nn

class GMF(nn.Module):
    def __init__(self, user_num, item_num, factor_num):
        super(GMF, self).__init__()
        '''
        user_num: number of users
        item_num: number of items
        factor_num: embedding (latent vector) dimension
        '''
        self.embed_user_GMF = nn.Embedding(user_num, factor_num)
        self.embed_item_GMF = nn.Embedding(item_num, factor_num)
        self.predict_layer = nn.Linear(factor_num, 1)
        self._init_weight_()

    def _init_weight_(self):
        nn.init.normal_(self.embed_user_GMF.weight, std=0.01)
        nn.init.normal_(self.embed_item_GMF.weight, std=0.01)

    def forward(self, user, item):
        embed_user_GMF = self.embed_user_GMF(user)
        embed_item_GMF = self.embed_item_GMF(item)
        # the GMF branch is the element-wise product of the two embeddings
        output_GMF = embed_user_GMF * embed_item_GMF
        prediction = self.predict_layer(output_GMF)
        return prediction.view(-1)
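As a quick sanity check, the GMF computation can be exercised piece by piece. This is a minimal sketch with made-up sizes (10 users, 20 items, 4-dimensional factors), not part of the original experiment:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
user_num, item_num, factor_num = 10, 20, 4  # toy sizes, assumed for illustration
embed_user = nn.Embedding(user_num, factor_num)
embed_item = nn.Embedding(item_num, factor_num)
predict = nn.Linear(factor_num, 1)

users = torch.tensor([0, 1, 2])   # a batch of three user IDs
items = torch.tensor([5, 6, 7])   # the paired item IDs
# element-wise product of the two latent vectors, then a learned linear layer
out = predict(embed_user(users) * embed_item(items)).view(-1)
print(out.shape)  # torch.Size([3]): one score per (user, item) pair
```

Note that each user/item ID is just an index into the embedding table, which is exactly the one-hot-times-weight-matrix mapping described above, done efficiently.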
MLP Model
Here user and item are again mapped to embeddings, but the MLP branch uses a different embedding dimension from GMF above, since the model architecture is different.
The code:
class MLP(nn.Module):
    def __init__(self, user_num, item_num, factor_num, num_layers, dropout):
        super(MLP, self).__init__()
        # the MLP tower halves its width at each layer, so the input
        # embeddings start at factor_num * 2^(num_layers - 1)
        self.embed_user_MLP = nn.Embedding(user_num, factor_num * (2 ** (num_layers - 1)))
        self.embed_item_MLP = nn.Embedding(item_num, factor_num * (2 ** (num_layers - 1)))