machine learning
Haiyun_Jin
Overview Of MNAR Matrix Completion Under Nuclear Norm assumption
Why Use Propensity Score. Setup: set up three matrices: a signal matrix $S \in \mathbb{R}^{m,n}$, a noise matrix $W \in \mathbb{R}^{m,n}$, and a probability matrix $P \in [0, 1]^{m,n}$… Original · 2020-03-26 22:55:38
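The truncated setup above introduces a signal, a noise, and a propensity (observation probability) matrix. A minimal sketch of how such a missing-not-at-random observation model could be simulated (not from the post itself; the dimensions, distributions, and names are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 50                                 # illustrative matrix dimensions

S = rng.normal(size=(m, n))                    # signal matrix S in R^{m,n}
W = rng.normal(scale=0.1, size=(m, n))         # noise matrix W in R^{m,n}
P = rng.uniform(0.1, 0.9, size=(m, n))         # propensity matrix P in [0,1]^{m,n}

A = rng.binomial(1, P)                         # entry (i, j) is revealed with probability P[i, j]
Y = np.where(A == 1, S + W, np.nan)            # observed noisy entries; the rest are missing
```

Under MNAR the revelation probabilities in $P$ themselves depend on the underlying values, which is why propensity-score corrections enter the picture.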
Norms in Matrix and Vector
Norms in Vector. P-norm: $||V||_P = (\sum_{i=1}^{n} |V_i|^p)^{\frac{1}{p}}$. Frobenius Norm: $||V||_F = (\sum_{i=1}^{n} |V_i|^2)^{\frac{1}{2}}$… Original · 2020-03-25 05:44:46
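As a quick, self-contained illustration of these two norms (not part of the original post), one can compute them directly and cross-check against NumPy:

```python
import numpy as np

v = np.array([3.0, -4.0, 12.0])

p = 3
p_norm = np.sum(np.abs(v) ** p) ** (1.0 / p)   # ||v||_p = (sum_i |v_i|^p)^(1/p)
frob = np.sqrt(np.sum(np.abs(v) ** 2))         # ||v||_F = (sum_i |v_i|^2)^(1/2), i.e. the 2-norm for vectors

assert np.isclose(p_norm, np.linalg.norm(v, ord=p))
assert np.isclose(frob, np.linalg.norm(v))
```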
SVD vs PCA vs 1bitMC
SVD vs PCA vs 1bitMC. EigenDecomposition, PCA, SVD. EigenDecomposition: for any real symmetric square $d \times d$ matrix $A$, we can find its eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_d$… Original · 2020-03-25 04:51:38
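A small sketch of the eigendecomposition described above (my own toy example, not code from the post): for a real symmetric matrix, `numpy.linalg.eigh` returns real eigenvalues, which can be sorted as $\lambda_1 \ge \dots \ge \lambda_d$ and used to rebuild the matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = (B + B.T) / 2                            # real symmetric d x d matrix

eigvals, eigvecs = np.linalg.eigh(A)         # eigh is specialized for symmetric/Hermitian matrices
order = np.argsort(eigvals)[::-1]            # sort so that lambda_1 >= ... >= lambda_d
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

A_rebuilt = eigvecs @ np.diag(eigvals) @ eigvecs.T   # A = Q diag(lambda) Q^T
assert np.allclose(A, A_rebuilt)
```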
Gibbs is special case of Metropolis
Although they appear quite different, Gibbs sampling is a special case of the Metropolis-Hastings algorithm. Specifically, Gibbs sampling involves a proposal from the full conditional distribution, which… Original · 2019-04-29 21:23:24
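One way to see the claim numerically (a sketch I am adding, not the post's own derivation): take a bivariate normal target, propose a coordinate update from its full conditional, and evaluate the Metropolis-Hastings acceptance ratio, which comes out to 1, so the Gibbs move is never rejected.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.7                                                   # illustrative correlation
target = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

def full_conditional(x2):
    # p(x1 | x2) for this target is N(rho * x2, 1 - rho^2)
    return norm(loc=rho * x2, scale=np.sqrt(1 - rho ** 2))

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])                                   # current state
x_new = x.copy()
x_new[0] = full_conditional(x[1]).rvs(random_state=rng)     # Gibbs update of the first coordinate

# MH acceptance ratio with the full conditional as the proposal distribution
num = target.pdf(x_new) * full_conditional(x_new[1]).pdf(x[0])
den = target.pdf(x) * full_conditional(x[1]).pdf(x_new[0])
print(num / den)                                            # 1.0 up to floating-point error
```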
EM and Variational Inference Derivation
https://chrischoy.github.io/research/Expectation-Maximization-and-Variational-Inference/ Reposted · 2019-05-04 23:49:10
Why does a Markov Matrix contain an eigenvalue = 1 and eigenvalues less than or equal to 1?
The intuition is that either the arbitrary vector $\vec{v}$ or the matrix $P$ preserves the probability property that the sum of the entries of $\vec{v}$ is 1 or the sum of each row in $P$ is 1. So that su… Original · 2019-04-29 00:16:11
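A worked check of this statement (added here as an illustration, not taken from the post): a row-stochastic matrix sends the all-ones vector to itself, so 1 is always an eigenvalue, and no eigenvalue exceeds 1 in modulus.

```python
import numpy as np

# Row-stochastic (Markov) matrix: every row sums to 1.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

ones = np.ones(3)
print(P @ ones)                  # the all-ones vector again, so 1 is an eigenvalue with eigenvector 1

eigvals = np.linalg.eigvals(P)
print(np.abs(eigvals))           # all moduli are <= 1
```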
Information Theory: Self-Information, Entropy, Relative Entropy, Cross Entropy, Conditional Entropy
Self-Information: $I(x) = \log \frac{1}{P(x)}$. Entropy: $H(X) = E[I(X)] = E\left(\log \frac{1}{P(X)}\right) = \sum_{x \in X} P(x)\log \frac{1}{P(x)}$… Original · 2019-04-13 02:17:47
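A tiny numeric example of these two definitions (my own toy distribution, not from the post):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])     # a toy probability distribution

self_info = np.log2(1.0 / p)                # I(x) = log 1/P(x), here measured in bits
entropy = np.sum(p * self_info)             # H(X) = E[I(X)] = sum_x P(x) log 1/P(x)

print(self_info)                            # [1. 2. 3. 3.]
print(entropy)                              # 1.75
```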
Ridge Linear Regression Estimation Invertible
Reference: https://math.stackexchange.com/questions/2447060/prove-that-the-regularization-term-in-rls-makes-the-matrix-invertible. $\hat{\theta} = (X^T X + \lambda I)^{-1} X^T y$… Original · 2019-04-10 22:58:21
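A short numerical sketch of why the regularizer makes the system solvable (an added illustration under my own assumptions about dimensions and data): with fewer samples than features, $X^T X$ is singular, but adding $\lambda I$ with $\lambda > 0$ shifts every eigenvalue up by $\lambda$, so the ridge normal equations always have a unique solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 10                                       # fewer samples than features: X^T X is rank-deficient
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
lam = 0.1

gram = X.T @ X
print(np.linalg.matrix_rank(gram))                 # at most n = 5 < d = 10, so gram is singular

ridge_gram = gram + lam * np.eye(d)
print(np.all(np.linalg.eigvalsh(ridge_gram) > 0))  # True: eigenvalues shifted up by lam > 0

theta_hat = np.linalg.solve(ridge_gram, X.T @ y)   # (X^T X + lambda I)^{-1} X^T y
```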
Bias-Variance+Noise Decomposition in Linear Regression
Model: $y = F(\mathbf{x}) + v$, where $F(\mathbf{x})$ can be regarded here as the oracle model; it does not change as the training data changes… Original · 2019-03-24 04:06:22
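A simulation sketch of this decomposition at a single test point (my own setup, not the post's code): refit the linear model on many fresh training sets drawn around a fixed oracle $F$, then read off bias$^2$, variance, and noise, whose sum approximates the expected squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
F = lambda x: 2.0 * x                      # fixed "oracle" model, unchanged across training sets
sigma = 0.5                                # standard deviation of the noise v
x0 = np.array([1.0])                       # test point

preds = []
for _ in range(2000):                      # many independent training sets
    x = rng.uniform(-1, 1, size=(30, 1))
    y = F(x).ravel() + rng.normal(scale=sigma, size=30)
    theta, *_ = np.linalg.lstsq(x, y, rcond=None)   # least-squares fit (no intercept, for simplicity)
    preds.append(float(x0 @ theta))

preds = np.array(preds)
bias_sq = (preds.mean() - float(F(x0)[0])) ** 2
variance = preds.var()
noise = sigma ** 2
print(bias_sq, variance, noise)            # expected squared error at x0 ≈ bias^2 + variance + noise
```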