[Recommender Systems] MF-OPC and MF-MPC

1. The MF-OPC Algorithm

In the matrix factorization (MF) algorithm we borrow the idea of SVD and decompose the user-item rating matrix into a matrix $U$ of latent user interests and a matrix $V$ of latent item features. When predicting each rating $r_{ui}$, however, we use only the latent features of user $u$ and item $i$:
$$\hat r_{ui}=U_{u.}V_{i.}^T+b_u+b_i+\mu$$
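As a quick illustration, here is a minimal numpy sketch of this biased-MF prediction rule; the sizes, factor matrices, and biases below are toy placeholders assumed only for illustration, not trained values.

import numpy as np

# toy sizes (assumptions for illustration only): n users, m items, d latent factors
n, m, d = 4, 6, 3
U = np.random.rand(n, d) * 0.01        # user latent factors U_{u.}
V = np.random.rand(m, d) * 0.01        # item latent factors V_{i.}
b_u, b_i, mu = np.zeros(n), np.zeros(m), 3.5   # user biases, item biases, global mean

def predict_mf(u, i):
    # \hat r_{ui} = U_{u.} V_{i.}^T + b_u + b_i + mu
    return U[u].dot(V[i]) + b_u[u] + b_i[i] + mu

print(predict_mf(0, 2))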
But a user's rating of an item does not always depend on these two factors alone; it may also be related to the other items that user has rated, which we denote $I_u\backslash\{i\}$. The prediction of $r_{ui}$ is then based not only on the latent features of the user and the item, but also on the rest of the items the user has rated, and the prediction rule becomes
$$\hat r_{ui}= U_{u.}V_{i.}^T+\bar U_{u.}^{OPC}V_{i.}^T+b_u+b_i+\mu$$
where
$$\bar U_{u.}^{OPC} =\frac{1}{\sqrt{|I_u\backslash\{i\}|}}\sum_{i' \in I_u\backslash\{i\}}O_{i'.}$$
Looking at the formula above, the model is clearly similar to SVD++. As in SVD++, we take the user's implicit feedback into account but ignore the actual rating values. We call this model MF-OPC (one-class preference context); in fact, MF-OPC can be regarded as the SVD++ algorithm. Since the two algorithms are essentially the same, the full code is not posted here.
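That said, a minimal self-contained sketch of the MF-OPC prediction rule may still help; the sizes, random factors, and the toy set of rated items I_u below are all assumptions made purely for illustration.

import numpy as np

n, m, d = 4, 6, 3                      # toy sizes: users, items, latent factors
U = np.random.rand(n, d) * 0.01        # explicit user factors U_{u.}
V = np.random.rand(m, d) * 0.01        # item factors V_{i.}
O = np.random.rand(m, d) * 0.01        # implicit item factors O_{i'.}
b_u, b_i, mu = np.zeros(n), np.zeros(m), 3.5
I_u = {0: [1, 2, 5], 1: [0, 3]}        # items each user has rated (toy data)

def predict_opc(u, i):
    rated = [j for j in I_u.get(u, []) if j != i]           # I_u \ {i}
    if rated:
        U_opc = O[rated].sum(axis=0) / np.sqrt(len(rated))  # \bar U_{u.}^{OPC}
    else:
        U_opc = np.zeros(d)
    return (U[u] + U_opc).dot(V[i]) + b_u[u] + b_i[i] + mu

print(predict_opc(0, 3))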

2. MF-MPC (with Code)

Building on MF-OPC, if we also bring in the rating values that were previously ignored, we obtain the MF-MPC (multi-class preference context) model. Its prediction rule is
$$\hat r_{ui}= U_{u.}V_{i.}^T+\bar U_{u.}^{MPC}V_{i.}^T+b_u+b_i+\mu$$
Let $S$ denote the set of possible rating scores, e.g. $S=\{1,2,3,4,5\}$. Then
$$\bar U_{u.}^{MPC} =\sum_{r \in S}\frac{1}{\sqrt{|I^r_u\backslash\{i\}|}}\sum_{i' \in I^r_u\backslash\{i\}}M_{i'.}^r$$
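To make the per-score grouping concrete, here is a small numpy sketch of how $\bar U_{u.}^{MPC}$ could be aggregated for one user; the per-score matrices $M^r$, the toy ratings, and the dimensions are assumptions for illustration only (the full training code below uses a single shared M as a simplification).

import numpy as np

d = 3                                              # latent dimension (toy value)
S = [1, 2, 3, 4, 5]                                # set of rating scores
M = {r: np.random.rand(6, d) * 0.01 for r in S}    # per-score item factors M^r for 6 toy items
user_ratings = {0: 4, 2: 4, 3: 5, 5: 2}            # item -> score for one user (toy data)

def u_mpc(i):
    # sum over scores r of (1 / sqrt(|I_u^r \ {i}|)) * sum of the M^r rows of items rated r
    vec = np.zeros(d)
    for r in S:
        items = [j for j, s in user_ratings.items() if s == r and j != i]
        if items:
            vec += M[r][items].sum(axis=0) / np.sqrt(len(items))
    return vec

print(u_mpc(0))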

The objective function is
$$\min_\theta \sum_{u=1}^n\sum_{i=1}^m y_{ui}\left[\frac{1}{2}(r_{ui}-\hat r_{ui})^2+\frac{\alpha_u}{2}\|U_{u.}\|^2+\frac{\alpha_v}{2}\|V_{i.}\|^2+\frac{\beta_u}{2}\|b_u\|^2+\frac{\beta_v}{2}\|b_i\|^2+\frac{\alpha_m}{2}\sum_{r \in S}\sum_{i' \in I^r_u\backslash\{i\}}\|M^r_{i'.}\|^2\right]$$
where $y_{ui}=1$ if user $u$ has rated item $i$ in the training set and $y_{ui}=0$ otherwise.
Let $e_{ui}=r_{ui}-\hat r_{ui}$. Taking partial derivatives with respect to the parameters $\theta=(\mu,b_u,b_i,U_{u.},V_{i.},M^r_{i'.})$ gives
$$\begin{aligned}
&\nabla_\mu=-e_{ui}\\
&\nabla_{b_u}=-e_{ui}+\beta_u b_u \\
&\nabla_{b_i}=-e_{ui}+\beta_v b_i \\
&\nabla_{U_{u.}}=-e_{ui}V_{i.}+\alpha_u U_{u.} \\
&\nabla_{V_{i.}}=-e_{ui}(U_{u.}+\bar U_{u.}^{MPC})+\alpha_v V_{i.}\\
&\nabla_{M_{i'.}^r}=-e_{ui}\frac{1}{\sqrt{|I_u^r\backslash \{i\}|}}V_{i.}+\alpha_m M_{i'.}^r,\quad i' \in I_u^r\backslash \{i\},\ r\in S
\end{aligned}$$
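Each training example $(u,i,r_{ui})$ is then used for a stochastic gradient descent step. Writing $\gamma$ for the learning rate (learning_rate in the code below), every parameter is updated as
$$\theta = \theta - \gamma\nabla_\theta$$
for example $b_u \leftarrow b_u + \gamma(e_{ui} - \beta_u b_u)$, which is exactly the update performed in the training loop below.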

Attached code:

import random
import math
import pandas as pd
import numpy as np

class MPC():
    def __init__(self, allfile, trainfile, testfile, latentFactorNum=20,alpha_u=0.01,alpha_v=0.01,alpha_m=0.01,beta_u=0.01,beta_v=0.01,learning_rate=0.01):
        data_fields = ['user_id', 'item_id', 'rating', 'timestamp']
        # all data file
        allData = pd.read_table(allfile, names=data_fields)
        user_list=sorted(set(allData['user_id'].values))
        item_list=sorted(set(allData['item_id'].values))

        # training set file
        self.train_df = pd.read_table(trainfile, names=data_fields)
        # testing set file
        self.test_df=pd.read_table(testfile, names=data_fields)

        self.rateing_score_set = [1,2,3,4,5]
        # build the full user-item rating matrix (rows: users, columns: items; NaN = unrated)
        data_df = pd.DataFrame(index=user_list, columns=item_list)
        rating_matrix = self.train_df.pivot(index='user_id', columns='item_id', values='rating')
        data_df.update(rating_matrix)
        self.rating_matrix = data_df.astype(float)  # ensure a numeric dtype for the arithmetic below


        # get factor number
        self.latentFactorNum = latentFactorNum
        # get user number
        self.userNum = len(set(allData['user_id'].values))
        # get item number
        self.itemNum = len(set(allData['item_id'].values))
        # learning rate
        self.learningRate = learning_rate
        # the regularization lambda
        self.alpha_u=alpha_u
        self.alpha_v=alpha_v
        self.alpha_m=alpha_m
        self.beta_u=beta_u
        self.beta_v=beta_v
        # initialize the model and parameters
        self.initModel()

    # initialize all parameters
    def initModel(self):
        self.mu = self.train_df['rating'].mean()
        # per-user bias: mean residual of that user's training ratings
        self.bu = (self.rating_matrix - self.mu).sum(axis=1) / self.rating_matrix.count(axis=1)
        self.bu = self.bu.values  # convert the pandas Series to a numpy array
        self.bu[np.isnan(self.bu)] = 0  # fill missing values (users without training ratings)
        print(self.bu.shape)
        # per-item bias: mean residual of that item's training ratings
        self.bi = (self.rating_matrix - self.mu).sum() / self.rating_matrix.count()
        self.bi = self.bi.values  # convert the pandas Series to a numpy array
        self.bi[np.isnan(self.bi)] = 0  # fill missing values (items without training ratings)

        # latent factor matrices, initialized with small random values
        self.U = np.mat((np.random.rand(self.userNum, self.latentFactorNum) - 0.05) * 0.01)
        self.V = np.mat((np.random.rand(self.itemNum, self.latentFactorNum) - 0.05) * 0.01)
        # NOTE: a single M matrix is shared across all rating values here,
        # a simplification of the per-score matrices M^r in the formula above
        self.M = np.mat((np.random.rand(self.itemNum, self.latentFactorNum) - 0.05) * 0.01)
        # self.bu = [0.0 for i in range(self.userNum)]
        # self.bi = [0.0 for i in range(self.itemNum)]
        # temp = math.sqrt(self.latentFactorNum)
        # self.U = [[(0.1 * random.random() / temp) for i in range(self.latentFactorNum)] for j in range(self.userNum)]
        # self.V = [[0.1 * random.random() / temp for i in range(self.latentFactorNum)] for j in range(self.itemNum)]

        print("Initialize end.The user number is:%d,item number is:%d" % (self.userNum, self.itemNum))

    def train(self, iterTimes=100):
        print("Beginning to train the model......")
        preRmse = 10000.0
        for iter in range(iterTimes):
            count=0
            for index in self.train_df.index:
                user = int(self.train_df.loc[index]['user_id'])-1
                item = int(self.train_df.loc[index]['item_id'])-1
                rating = float(self.train_df.loc[index]['rating'])
                pscore = self.predictScore(self.mu, self.bu[user], self.bi[item], self.U[user], self.V[item], user + 1)
                eui = rating - pscore
                #print(self.mu, self.bu[user], self.bi[item], self.U[user], self.V[item], user+1, eui)
                # update the global mean, the user rating bias and the item rating bias
                self.mu += self.learningRate * eui
                self.bu[user] += self.learningRate * (eui - self.beta_u * self.bu[user])
                self.bi[item] += self.learningRate * (eui - self.beta_v * self.bi[item])

                temp_Uuser = self.U[user]
                temp_Vitem = self.V[item]

                # accumulate the multi-class preference context vector U_MPC and
                # update the M rows of the items the user rated with each score
                # (for simplicity the target item i is not excluded from I_u^r here)
                user_id = user + 1
                U_MPC = np.zeros((1, self.latentFactorNum))
                for score in self.rateing_score_set:
                    temp = self.rating_matrix.loc[user_id][self.rating_matrix.loc[user_id] == score]
                    temp_count = temp.count()
                    if temp_count == 0:
                        continue
                    U_MPC += self.M[temp.index - 1].sum(axis=0) / math.sqrt(temp_count)
                    self.M[temp.index - 1] += self.learningRate * (eui * temp_Vitem / math.sqrt(temp_count) - self.alpha_m * self.M[temp.index - 1])

                # update the latent factors of the current user and item
                self.U[user] += self.learningRate * (eui * temp_Vitem - self.alpha_u * self.U[user])
                self.V[item] += self.learningRate * (eui * (temp_Uuser + U_MPC) - self.alpha_v * self.V[item])
                # for k in range(self.latentFactorNum):
                #     temp = self.U[user][k]
                #     # update U,V
                #     self.U[user][k] += self.learningRate * (eui * self.V[user][k] - self.alpha_u * self.U[user][k])
                #     self.V[item][k] += self.learningRate * (temp * eui - self.alpha_v * self.V[item][k])
                #
                count += 1
                if count  % 5000 == 0 :
                    print("第%s轮进度:%s/%s" %(iter+1,count,len(self.train_df.index)))
                    # calculate the current rmse
                    curRmse = self.test()
                    print("Iteration %d times,RMSE is : %f" % (iter + 1, curRmse))
                    if curRmse > preRmse:
                        break
                    else:
                        preRmse = curRmse
            self.learningRate = self.learningRate * 0.9  # decay the learning rate
            curRmse = self.test()
            print("Iteration %d times,RMSE is : %f" % (iter + 1, curRmse))
            if curRmse > preRmse:
                break
            else:
                preRmse = curRmse
        print("Iteration finished!")

    # test on the test set and calculate the RMSE
    def test(self):
        cnt = self.test_df.shape[0]
        rmse = 0.0

        # buT=bu.reshape(bu.shape[0],1)
        # predict_rate_matrix = mu + np.tile(buT,(1,self.itemNum))+ np.tile(bi,(self.userNum,1)) +  self.U * self.V.T
        cur = 0
        for i in self.test_df.index:
            cur +=1
            if cur % 1000 == 0:
                print("测试进度:%s/%s" %(cur,len(self.test_df.index)))
            user = int(self.test_df.loc[i]['user_id']) - 1
            item = int(self.test_df.loc[i]['item_id']) - 1
            score = float(self.test_df.loc[i]['rating'])
            pscore = self.predictScore(self.mu, self.bu[user], self.bi[item], self.U[user], self.V[item], user + 1)
            # pscore = predict_rate_matrix[user,item]
            rmse += math.pow(score - pscore, 2)
            #print(score,pscore,rmse)
        RMSE=math.sqrt(rmse / cnt)
        return RMSE


    # calculate the inner product of two vectors
    def innerProduct(self, v1, v2):
        result = 0.0
        for i in range(len(v1)):
            result += v1[i] * v2[i]
        return result

    def predictScore(self, mu, bu, bi, U, V, user_id):
        # aggregate the multi-class preference context vector U_MPC for this user
        U_MPC = np.zeros((1, self.latentFactorNum))
        for score in self.rateing_score_set:
            temp = self.rating_matrix.loc[user_id][self.rating_matrix.loc[user_id] == score]
            if temp.count() == 0:
                continue
            U_MPC += self.M[temp.index - 1].sum(axis=0) / math.sqrt(temp.count())
        pscore = mu + bu + bi + np.multiply(U, V).sum() + np.multiply(U_MPC, V).sum()
        if np.isnan(pscore):
            print("!!!!")
            print(mu,bu,bi,np.multiply(U,V).sum(),np.multiply(U_MPC,V).sum(),U_MPC)
        if pscore < 1:
            pscore = 1
        if pscore > 5:
            pscore = 5
        return pscore


if __name__ == '__main__':
    s = MPC("../datasets/ml-100k/u.data", "../datasets/ml-100k/u1.base", "../datasets/ml-100k/u1.test")
    s.train()
