Principal Component Analysis
PCA
PCA finds the dominant directions of a point cloud.
Applications:
- Dimensionality reduction
- Surface normal estimation
- Canonical orientation
- Keypoint detection
- Feature description
Physical intuitions:
- Vector Dot Product
- Matrix-Vector Multiplication
- Singular Value Decomposition (SVD)
Spectral Theorem:
$A = U\Lambda U^T = \sum_{i=1}^n \lambda_i u_i u_i^T, \quad \Lambda = \mathrm{diag}(\lambda_1, \cdots, \lambda_n)$
Rayleigh Quotients:
$\lambda_{min}(A) \le \frac{x^T A x}{x^T x} \le \lambda_{max}(A), \quad \forall x \ne 0$
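As a quick sanity check of both statements, a minimal NumPy snippet (my own illustration, not from the notes) that rebuilds a symmetric matrix from its eigendecomposition and verifies the Rayleigh quotient bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                      # symmetric matrix

# Spectral theorem: A = U diag(lambda) U^T
eigvals, U = np.linalg.eigh(A)
assert np.allclose(A, U @ np.diag(eigvals) @ U.T)

# Rayleigh quotient of any nonzero x lies in [lambda_min, lambda_max]
x = rng.standard_normal(4)
rq = (x @ A @ x) / (x @ x)
assert eigvals.min() - 1e-12 <= rq <= eigvals.max() + 1e-12
```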
Summary:
- Normalize by the center: $\tilde{X} = [\tilde{x}_1, \cdots, \tilde{x}_m]$, $\tilde{x}_i = x_i - \overline{x}$
- Compute the SVD: $H = \tilde{X}\tilde{X}^T = U_r \Sigma^2 U_r^T$
- The principal vectors are the columns of $U_r$ (the left singular vectors of $\tilde{X}$, i.e. the eigenvectors of $H$)
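A minimal NumPy sketch of these three steps, assuming the point cloud is stored as an $n \times m$ matrix with one point per column (as above); the synthetic example is my own:

```python
import numpy as np

def pca(X):
    """PCA of a point cloud X of shape (n, m): n dimensions, m points.

    Returns eigenvalues of H (descending) and principal vectors as columns.
    """
    x_bar = X.mean(axis=1, keepdims=True)
    X_tilde = X - x_bar                       # center the cloud
    # Left singular vectors of X_tilde = eigenvectors of H = X_tilde X_tilde^T
    U_r, S, _ = np.linalg.svd(X_tilde, full_matrices=False)
    return S**2, U_r                          # eigenvalues of H are sigma^2

# Example: an elongated 3D cloud; the first principal vector should
# point roughly along the long (x) axis.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 500)) * np.array([[10.0], [1.0], [0.1]])
eigvals, U_r = pca(X)
print(eigvals)        # dominant eigenvalue first
print(U_r[:, 0])      # ~ [+-1, 0, 0]
```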
KPCA
Input data $x_i \in \mathbb{R}^{n_0}$; apply a non-linear mapping $\phi: \mathbb{R}^{n_0} \rightarrow \mathbb{R}^{n_1}$, then follow the standard linear PCA:
$\tilde{z} = \sum_{j=1}^N \alpha_j \phi(x_j) \rightarrow K\alpha = \lambda\alpha$
The normalization of $\tilde{z}$: $\alpha_r^T \lambda_r \alpha_r = 1 \rightarrow \alpha_r^T \alpha_r = 1/\lambda_r$
Kernels:
- Linear: $k(x_i, x_j) = x_i^T x_j$
- Polynomial: $k(x_i, x_j) = (1 + x_i^T x_j)^p$
- Gaussian: $k(x_i, x_j) = e^{-\beta \|x_i - x_j\|_2^2}$
- Laplacian: $k(x_i, x_j) = e^{-\beta \|x_i - x_j\|_1}$
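A sketch of these four kernels as Gram-matrix builders in NumPy/SciPy (function names and the $\beta$, $p$ defaults are my own choices):

```python
import numpy as np
from scipy.spatial.distance import cdist

# X: (N, d), Y: (M, d); each returns the (N, M) Gram matrix.
def linear_kernel(X, Y):
    return X @ Y.T

def polynomial_kernel(X, Y, p=2):
    return (1.0 + X @ Y.T) ** p

def gaussian_kernel(X, Y, beta=1.0):
    return np.exp(-beta * cdist(X, Y, "sqeuclidean"))   # squared L2 norm

def laplacian_kernel(X, Y, beta=1.0):
    return np.exp(-beta * cdist(X, Y, "cityblock"))     # L1 norm
```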
Summary:
- Select a kernel $k(x_i, x_j)$ and compute the Gram matrix $K(i,j) = k(x_i, x_j)$
- Normalize $K$: $\tilde{K} = K - \mathbf{1}_{\frac{1}{N}} K - K \mathbf{1}_{\frac{1}{N}} + \mathbf{1}_{\frac{1}{N}} K \mathbf{1}_{\frac{1}{N}}$, where $\mathbf{1}_{\frac{1}{N}}$ is the $N \times N$ matrix with every entry $\frac{1}{N}$
- Solve for the eigenvectors/eigenvalues of $\tilde{K}$: $\tilde{K}\alpha_r = \lambda_r \alpha_r$
- Normalize $\alpha_r$ so that $\alpha_r^T \alpha_r = \frac{1}{\lambda_r}$
- For any data point $x \in \mathbb{R}^{n_0}$, compute its projection onto the $r^{th}$ principal component $y_r \in \mathbb{R}$: $y_r = \phi^T(x)\tilde{z}_r = \sum_{j=1}^N \alpha_{rj} k(x, x_j)$
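Putting the steps together, a minimal self-contained KPCA sketch with a Gaussian kernel (all names are mine; it returns the projections of the training points themselves, $y_r(x_i) = (\tilde{K}\alpha_r)_i$):

```python
import numpy as np
from scipy.spatial.distance import cdist

def kpca(X, kernel, n_components=2):
    """X: (N, d) data matrix. Returns projections of X, shape (N, n_components)."""
    N = X.shape[0]
    K = kernel(X, X)                            # Gram matrix K(i,j) = k(x_i, x_j)
    ones = np.full((N, N), 1.0 / N)             # the 1_{1/N} matrix
    K_tilde = K - ones @ K - K @ ones + ones @ K @ ones
    eigvals, alphas = np.linalg.eigh(K_tilde)   # ascending; flip to descending
    eigvals, alphas = eigvals[::-1], alphas[:, ::-1]
    # eigh returns unit vectors; rescale so alpha_r^T alpha_r = 1 / lambda_r
    alphas = alphas[:, :n_components] / np.sqrt(eigvals[:n_components])
    return K_tilde @ alphas                     # y_r = sum_j alpha_rj k~(x, x_j)

# Example with a Gaussian kernel (beta = 0.5).
gaussian = lambda A, B: np.exp(-0.5 * cdist(A, B, "sqeuclidean"))
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Y = kpca(X, gaussian)
print(Y.shape)    # (100, 2)
```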
Surface Normal
The surface normal at a point P is the vector perpendicular to the tangent plane of the surface at P.
Applications:
- Segmentation/Clustering
- Plane detection
- Point cloud feature for applications like Deep Learning
Steps:
- Select a point P
- Find the neighborhood that defines the surface
- PCA $\rightarrow \min\limits_{n \in \mathbb{R}^3} n^T \tilde{X} W \tilde{X}^T n$, s.t. $\|n\|_2 = 1$, where $W$ is a diagonal matrix of (optional) neighbor weights
- Normal $\rightarrow$ the least significant principal vector; by the Rayleigh quotient bound, the minimizer is the eigenvector with the smallest eigenvalue
- Curvature $\rightarrow$ the ratio between eigenvalues, $\lambda_3/(\lambda_1+\lambda_2+\lambda_3)$ with $\lambda_1 \ge \lambda_2 \ge \lambda_3$
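A minimal sketch of these steps at a single point, using a brute-force k-nearest-neighbor neighborhood and $W = I$ (the value of k and all names are my own choices):

```python
import numpy as np

def estimate_normal(points, p_idx, k=15):
    """points: (m, 3) cloud, one point per row. Returns (unit normal, curvature)."""
    p = points[p_idx]
    # Find the neighborhood that defines the local surface patch.
    dists = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(dists)[:k]]
    X_tilde = nbrs - nbrs.mean(axis=0)         # center the patch
    H = X_tilde.T @ X_tilde                    # 3x3 covariance (points as rows, W = I)
    eigvals, eigvecs = np.linalg.eigh(H)       # eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # least significant direction
    curvature = eigvals[0] / eigvals.sum()     # lambda_3 / (l_1 + l_2 + l_3)
    return normal, curvature

# Example: points sampled near the z = 0 plane; normal ~ +-[0, 0, 1].
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                       0.01 * rng.standard_normal(200)])
n, c = estimate_normal(pts, 0)
print(n, c)
```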