LFM is the Funk SVD matrix factorization mentioned earlier.
How LFM works
The core idea of LFM (latent factor model) is to connect users and items through latent features, as shown in the figure below:
- The P matrix is the User-LF matrix, i.e. the user/latent-feature matrix. There are three LFs here, meaning three latent features in total.
- The Q matrix is the LF-Item matrix, i.e. the latent-feature/item matrix.
- The R matrix is the User-Item matrix, obtained from P*Q.
- The method can handle sparse rating matrices.
Using matrix factorization, the original User-Item rating matrix (dense or sparse) is decomposed into the matrices P and Q; multiplying $P \cdot Q$ then reconstructs the User-Item rating matrix $R$. The whole process amounts to dimensionality reduction, where:
- The entry $P_{11}$ is user 1's weight on latent feature 1
- The entry $Q_{11}$ is latent feature 1's weight on item 1
- The entry $R_{11}$ is the predicted rating of user 1 on item 1, where $R_{11}=\vec{P_{1,k}}\cdot \vec{Q_{k,1}}$
Using LFM to predict a user's rating of an item, where $K$ is the number of latent features:

$$\begin{aligned} \hat {r}_{ui} &=\vec {p_{uk}}\cdot \vec {q_{ik}} \\&=\sum_{k=1}^{K} p_{uk}q_{ik} \end{aligned}$$
Our goal, then, is to solve for the P and Q matrices and every value in them, and use them to predict user-item ratings.
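The prediction above can be sketched with NumPy; the shapes and the choice of 3 latent features are illustrative, matching the figure:

```python
import numpy as np

# A minimal sketch: P is the User-LF matrix, Q the LF-Item matrix
# (3 latent features, as in the figure above).
rng = np.random.default_rng(0)
P = rng.random((4, 3))   # 4 users x 3 latent features
Q = rng.random((3, 5))   # 3 latent features x 5 items

R_hat = P @ Q            # reconstructed User-Item rating matrix

# A single prediction r_hat_ui is the dot product of user u's
# latent-feature row and item i's latent-feature column.
u, i = 1, 2
r_ui = P[u, :] @ Q[:, i]
assert np.isclose(R_hat[u, i], r_ui)
```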
Loss function
As with other rating-prediction methods, we build the loss function from the squared error:
$$\begin{aligned} Cost &= \sum_{u,i\in R} (r_{ui}-\hat{r}_{ui})^2 \\&=\sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)^2 \end{aligned}$$
Adding L2 regularization:
$$Cost = \sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)^2 + \lambda\Big(\sum_U p_{uk}^2+\sum_I q_{ik}^2\Big)$$
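The regularized loss, summed only over observed ratings, can be sketched as follows; the toy matrix and $\lambda$ value are illustrative, with `nan` marking missing ratings:

```python
import numpy as np

# Illustrative sparse rating matrix: nan = unobserved entry.
R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0]])
rng = np.random.default_rng(1)
P = rng.random((2, 3))   # users x K
Q = rng.random((3, 3))   # K x items
lam = 0.01

def cost(R, P, Q, lam):
    # Squared error over observed entries plus L2 penalty on P and Q.
    mask = ~np.isnan(R)
    err = (R - P @ Q)[mask]
    return (err ** 2).sum() + lam * ((P ** 2).sum() + (Q ** 2).sum())

print(cost(R, P, Q, lam))
```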
Taking partial derivatives of the loss function:
$$\begin{aligned} \cfrac {\partial}{\partial p_{uk}}Cost &= \cfrac {\partial}{\partial p_{uk}}\Big[\sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)^2 + \lambda\Big(\sum_U p_{uk}^2+\sum_I q_{ik}^2\Big)\Big] \\&=2\sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)(-q_{ik}) + 2\lambda p_{uk} \\\\ \cfrac {\partial}{\partial q_{ik}}Cost &= \cfrac {\partial}{\partial q_{ik}}\Big[\sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)^2 + \lambda\Big(\sum_U p_{uk}^2+\sum_I q_{ik}^2\Big)\Big] \\&=2\sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)(-p_{uk}) + 2\lambda q_{ik} \end{aligned}$$
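These analytic gradients can be sanity-checked against a finite-difference estimate; a minimal sketch using toy data with a fully observed $R$ (all shapes and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.random((3, 4)) * 5          # fully observed toy ratings
P = rng.random((3, 2))              # users x K
Q = rng.random((2, 4))              # K x items
lam = 0.1
u, k = 1, 0                         # check the gradient at P[u, k]

def cost(P, Q):
    err = R - P @ Q
    return (err ** 2).sum() + lam * ((P ** 2).sum() + (Q ** 2).sum())

# Analytic gradient: 2 * sum_i (r_ui - p_u . q_i) * (-q_ik) + 2*lam*p_uk
err_u = R[u, :] - P[u, :] @ Q
grad = 2 * (err_u * -Q[k, :]).sum() + 2 * lam * P[u, k]

# Central finite difference on the same entry.
eps = 1e-6
P_plus = P.copy();  P_plus[u, k] += eps
P_minus = P.copy(); P_minus[u, k] -= eps
fd = (cost(P_plus, Q) - cost(P_minus, Q)) / (2 * eps)
assert np.isclose(grad, fd, atol=1e-4)
```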
Optimization with stochastic gradient descent
Gradient descent update for the parameter $p_{uk}$:
$$\begin{aligned} p_{uk}&=p_{uk} - \alpha\cfrac {\partial}{\partial p_{uk}}Cost \\&=p_{uk}-\alpha \Big[2\sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)(-q_{ik}) + 2\lambda p_{uk}\Big] \\&=p_{uk}+\alpha \Big[\sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)q_{ik} - \lambda p_{uk}\Big] \end{aligned}$$

(the constant factor 2 in the last step is absorbed into the learning rate $\alpha$)
Similarly:
$$\begin{aligned} q_{ik}&=q_{ik} + \alpha\Big[\sum_{u,i\in R} \Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)p_{uk} - \lambda q_{ik}\Big] \end{aligned}$$
Stochastic gradient descent updates on a single observed rating at a time (the vector product multiplies corresponding components and sums them):
$$\begin{aligned} &p_{uk}=p_{uk}+\alpha \Big[\Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)q_{ik} - \lambda_1 p_{uk}\Big] \\&q_{ik}=q_{ik} + \alpha\Big[\Big(r_{ui}-\sum_{k=1}^{K} p_{uk}q_{ik}\Big)p_{uk} - \lambda_2 q_{ik}\Big] \end{aligned}$$
Since P and Q are two different matrices, they usually get separate regularization parameters, such as $\lambda_1$ and $\lambda_2$.
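The SGD updates above can be sketched as a small training loop; all hyperparameters (K, `alpha`, `lam1`, `lam2`, `epochs`) and the toy ratings are illustrative, not tuned values:

```python
import numpy as np

def lfm_sgd(ratings, n_users, n_items, K=3, alpha=0.01,
            lam1=0.02, lam2=0.02, epochs=50, seed=0):
    """Train LFM by SGD using the per-sample updates above,
    with separate regularizers lam1 for P and lam2 for Q."""
    rng = np.random.default_rng(seed)
    P = rng.random((n_users, K)) * 0.1   # User-LF matrix
    Q = rng.random((K, n_items)) * 0.1   # LF-Item matrix
    for _ in range(epochs):
        for u, i, r_ui in ratings:       # one observed rating at a time
            pu = P[u, :].copy()          # keep the pre-update row for Q's step
            err = r_ui - pu @ Q[:, i]
            P[u, :] += alpha * (err * Q[:, i] - lam1 * pu)
            Q[:, i] += alpha * (err * pu - lam2 * Q[:, i])
    return P, Q

# Usage: a sparse rating matrix given as (user, item, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
P, Q = lfm_sgd(ratings, n_users=2, n_items=3)
print(P @ Q)  # reconstructed User-Item rating matrix
```

Note that only observed triples drive the updates, which is exactly how LFM handles a sparse rating matrix.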