An English Paper on the Application of SVD to Image Compression

Image Compression

Contents

1 Introduction

2 The Theoretic Basis of Image Compression

2.1 The Principle of Image Storage
2.2 The Principle of the Singular Value Decomposition

2.2.1 Eigenvalues and Diagonalization

2.2.2 Symmetric Matrices and Orthogonally Diagonalizable Matrix

2.2.3 Singular Value Decomposition (SVD)

2.3 The Application of SVD in Image Compression
2.4 The Appropriate Number of Singular Values

3 Programs

4 Conclusion


1 Introduction

Image compression is essential: it lets computers store images with less memory and lets web pages load images faster. In fact, most of the images we encounter on the Internet are compressed; for example, JPEG, PNG and GIF are all compressed formats. Admittedly, compression inevitably degrades image quality, but some trivial details are not worth preserving. Thus, weighing a small loss in quality against a dramatic reduction in storage, we prefer the compressed picture. Furthermore, an optimized compression algorithm can reduce the storage as much as possible with deterioration so slight that human eyes can hardly detect it. As is well known, computers store images as matrices, and each image corresponds to a matrix in the computer, so compression can be performed through operations on matrices. After some research on this topic, we decided to use the singular value decomposition (SVD): by keeping some of the largest singular values and the corresponding left and right singular vectors, we can represent an image with much less data and only small deterioration.

2 The Theoretic Basis of Image Compression

2.1 The Principle of Image Storage

Computers store images as matrices. For an RGB image, each image corresponds to a three-dimensional tensor, which can be regarded as a combination of three matrices.

The shape of each of these three matrices is the same as that of the original image.

In computers, each pixel of the image corresponds to three integers ranging from 0 to 255, whose values represent the intensity of the pixel in the R, G and B channels respectively. In practice, the integers are often normalized to floats between 0 and 1 before processing, but each channel value occupies 1 byte of storage.

Thus, for an RGB image with m×n pixels, we need approximately 3×m×n bytes of storage.
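As a quick check of this arithmetic, the footprint can be confirmed with NumPy (the 512×512 size is an assumed example, not an image from this paper):

```python
import numpy as np

# Hypothetical 512x512 RGB image: one byte per channel per pixel.
m, n = 512, 512
img = np.zeros((m, n, 3), dtype=np.uint8)

# The raw storage is 3 * m * n bytes, matching the array's memory footprint.
assert img.nbytes == 3 * m * n == 786432
```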

2.2 The Principle of the Singular Value Decomposition

2.2.1 Eigenvalues and Diagonalization
Eigenvalues and Eigenvectors

If a nonzero vector $\nu$ satisfies the following equation for a square matrix $A$:
$$A\nu=\lambda\nu$$
then $\lambda$ is called an eigenvalue of $A$, while $\nu$ is called an eigenvector.
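As a small numerical illustration (not part of the original paper), `np.linalg.eig` returns such eigenpairs, and each pair satisfies the defining equation:

```python
import numpy as np

# A small square matrix with known eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v of `eigenvectors` satisfies A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

assert np.allclose(sorted(eigenvalues), [1.0, 3.0])
```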

Diagonalization

If the $n\times n$ square matrix $A$ has $n$ linearly independent eigenvectors, then $A$ can be diagonalized, which means that we can write:
$$P^{-1}AP=\Lambda$$
The columns of $P$ are the eigenvectors of $A$, and $\Lambda$ is a diagonal matrix whose diagonal entries are the eigenvalues of $A$ corresponding to the eigenvectors in $P$ respectively.
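This factorization is easy to verify numerically; a minimal sketch with an assumed 2×2 example matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])     # eigenvalues 5 and 2
lam, P = np.linalg.eig(A)      # columns of P are eigenvectors of A

# P^{-1} A P collapses to a diagonal matrix holding the eigenvalues,
# ordered to match the eigenvector columns of P.
Lambda = np.linalg.inv(P) @ A @ P
assert np.allclose(Lambda, np.diag(lam))
```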

2.2.2 Symmetric Matrices and Orthogonally Diagonalizable Matrix

If $A$ is a symmetric matrix (namely $A=A^{T}$), it can be orthogonally diagonalized.

That is, we can write:
$$Q^{-1}AQ=\Lambda$$
(where $Q$ is an orthogonal matrix, i.e. $Q^{-1}=Q^{T}$)

or equivalently:
$$A=Q\Lambda Q^{-1}$$
This is a special kind of eigenvalue decomposition.
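For symmetric matrices, NumPy's `np.linalg.eigh` returns such an orthogonal $Q$ directly; a short check on an assumed example matrix:

```python
import numpy as np

# A symmetric 3x3 matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, Q = np.linalg.eigh(A)     # eigh is specialized for symmetric matrices

assert np.allclose(Q.T @ Q, np.eye(3))         # Q is orthogonal: Q^{-1} = Q^T
assert np.allclose(Q.T @ A @ Q, np.diag(lam))  # Q^{-1} A Q = Lambda
```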

2.2.3 Singular Value Decomposition (SVD)

The eigenvalue decomposition can only be applied to certain square matrices, so it is limited. If we want to decompose a more general matrix (for example an $m\times n$ matrix with $m\neq n$), we need another method.

Firstly, we assume that the matrix $A$ can be decomposed into the following form:
$$A=U_{m\times m}\begin{pmatrix}\sigma_{1}&&&\\&\ddots&&\\&&\sigma_{r}&\\&&&0\end{pmatrix}_{m\times n}V^{T}_{n\times n}=U\Sigma V^{T}\qquad(1)$$
(where $U$ and $V$ are both orthogonal matrices)
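In NumPy this factorization is computed by `np.linalg.svd`; a sketch on an assumed 3×4 matrix, showing that the shapes match form (1):

```python
import numpy as np

A = np.arange(12, dtype=float).reshape(3, 4)   # an arbitrary 3x4 matrix
U, s, Vt = np.linalg.svd(A)                    # full matrices by default

assert U.shape == (3, 3) and s.shape == (3,) and Vt.shape == (4, 4)

# Embed the singular values into the rectangular m x n Sigma and reconstruct A.
S = np.zeros((3, 4))
np.fill_diagonal(S, s)
assert np.allclose(U @ S @ Vt, A)
```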

To prove the existence of this decomposition, we only need to prove the existence of $U$, $\Sigma$ and $V^{T}$.

According to (1), we can find that:
$$A_{m\times n}V_{n\times n}=U_{m\times m}\Sigma_{m\times n}V^{T}_{n\times n}V_{n\times n}=U_{m\times m}\Sigma_{m\times n}\qquad(2)$$

$$A^{T}_{n\times m}=(U_{m\times m}\Sigma_{m\times n}V^{T}_{n\times n})^{T}=V_{n\times n}\Sigma^{T}_{n\times m}U^{T}_{m\times m}\qquad(3)$$

According to (3), we can find that:
$$A^{T}_{n\times m}U_{m\times m}=V_{n\times n}\Sigma^{T}_{n\times m}U^{T}_{m\times m}U_{m\times m}=V_{n\times n}\Sigma^{T}_{n\times m}\qquad(4)$$
Left-multiplying (2) by $A^{T}$ and left-multiplying (4) by $A$, we get:
$$A^{T}_{n\times m}A_{m\times n}V_{n\times n}=A^{T}_{n\times m}U_{m\times m}\Sigma_{m\times n}=V_{n\times n}\Sigma^{T}_{n\times m}\Sigma_{m\times n}=V_{n\times n}\Lambda_{n\times n}\qquad(5)$$

$$A_{m\times n}A^{T}_{n\times m}U_{m\times m}=A_{m\times n}V_{n\times n}\Sigma^{T}_{n\times m}=U_{m\times m}\Sigma_{m\times n}\Sigma^{T}_{n\times m}=U_{m\times m}\Lambda_{m\times m}\qquad(6)$$

(where $\Lambda_{n\times n}$ and $\Lambda_{m\times m}$ are both diagonal matrices)$^{(*)}$

For the (*), we can see that:
$$\Lambda_{m\times m}=\Sigma_{m\times n}\Sigma^{T}_{n\times m}=\mathrm{diag}(\sigma_{1}^{2},\ldots,\sigma_{r}^{2},0,\ldots,0)_{m\times m}$$

$$\Lambda_{n\times n}=\Sigma^{T}_{n\times m}\Sigma_{m\times n}=\mathrm{diag}(\sigma_{1}^{2},\ldots,\sigma_{r}^{2},0,\ldots,0)_{n\times n}$$

or in the following form:
$$A^{T}_{n\times m}A_{m\times n}=V_{n\times n}\Lambda_{n\times n}V_{n\times n}^{-1}$$

$$A_{m\times n}A^{T}_{n\times m}=U_{m\times m}\Lambda_{m\times m}U_{m\times m}^{-1}$$

So (5) is the orthogonal diagonalization of $A^{T}A$, while (6) is the orthogonal diagonalization of $AA^{T}$, which means we can diagonalize these two matrices to obtain the matrices $U$ and $V$. For the matrix $\Sigma$, each $\sigma_{i}$ is called a singular value, which is the square root of an eigenvalue of $A^{T}A$ and $AA^{T}$.

Actually, $A^{T}A$ and $AA^{T}$ are two symmetric matrices built from the general matrix $A$:
$$A^{T}A=(A^{T}A)^{T}$$

$$AA^{T}=(AA^{T})^{T}$$

So these two matrices can be orthogonally diagonalized, which means the existence of $\Sigma$, $U$ and $V$ is proved.
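The relationship above — singular values are the square roots of the eigenvalues of $A^{T}A$ — can be checked numerically on a random matrix (a sketch, not part of the paper's program):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))            # a general non-square matrix

s = np.linalg.svd(A, compute_uv=False)     # singular values, descending
eigvals = np.linalg.eigvalsh(A.T @ A)      # eigenvalues of A^T A, ascending

# sigma_i = sqrt(lambda_i) once both lists are put in the same order.
assert np.allclose(np.sort(s), np.sqrt(np.clip(eigvals, 0.0, None)))
```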

2.3 The Application of SVD in Image Compression

From Section 2.1, we know that any RGB image can be stored as a 3D tensor with three channels. So we can compress the matrix in each channel and combine the results to get the compressed image.

As we have mentioned before, any m×n matrix can be decomposed in the following form:

$$A=U\Sigma V^{T}=\sum_{i=1}^{r}\sigma_{i}u_{i}v_{i}^{T}$$

We can rewrite the matrix in spectral decomposed form, where the singular values act as weights on the decomposed sub-matrices: a sub-matrix with a larger singular value has a greater impact on the result and contributes more information. So, when performing singular value decomposition, we arrange the singular values in decreasing order, with the largest at the front. Thus, we only need to keep the first $k$ singular values and the corresponding vectors to represent the original image with relatively little deterioration. Suppose we have an image with $m\times n$ pixels; we need $3\times m\times n$ bytes to store it. However, after singular value decomposition, keeping only the first $k$ singular values, it occupies just $3k(m+n+1)$ bytes. In fact $k\ll m$ and $k\ll n$, so the storage space for the image is greatly reduced. We define $\frac{k(m+n+1)}{m\times n}$ as the compression ratio.
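Plugging in the 512×512, k = 53 case used later in this paper gives a concrete feel for the ratio (raw byte counts, not on-disk JPEG file sizes):

```python
# Storage comparison for an m x n RGB image kept to k singular values.
m, n, k = 512, 512, 53

original_bytes = 3 * m * n               # 786432 bytes
compressed_bytes = 3 * k * (m + n + 1)   # 162975 bytes
ratio = k * (m + n + 1) / (m * n)        # about 0.207

assert compressed_bytes < original_bytes
```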

The following images show the SVD reconstructions for different values of $k$:

[Figure: reconstructed images for several values of k]

2.4 The Appropriate Number of Singular Values

We have seen that each singular value of a matrix can be regarded as a weight for the corresponding decomposed sub-matrix: a sub-matrix with a larger singular value has a greater impact on the result and contributes more information. Having achieved compression, we want to find $k$ (the appropriate number of singular values) that reduces the image's storage as much as possible with only trivial deterioration. To obtain an appropriate $k$, we first sketch the graph of the singular values against their indices.

[Plot: singular value magnitude versus index]

Unfortunately, the graph is concave up: it decreases quickly at first and then levels off, so there is no obvious cut-off point that satisfies our goal. However, after compressing several images, we found that if we choose the first $k$ singular values whose sum makes up 70% of the total sum, the difference between the compressed image and the original can hardly be distinguished.
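The 70% rule can be written as a small helper; a sketch (the function name `choose_k` is our own, not from the paper's program):

```python
import numpy as np

def choose_k(singular_values, threshold=0.7):
    """Smallest k whose leading singular values reach `threshold` of the total sum."""
    s = np.sort(np.asarray(singular_values))[::-1]   # largest first
    cumulative = np.cumsum(s) / s.sum()
    return int(np.searchsorted(cumulative, threshold)) + 1

# Quickly decaying values: the first two already carry 75% of the sum.
assert choose_k([10.0, 5.0, 3.0, 1.0, 1.0]) == 2
```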
For this 512×512-pixel image, we have 512 singular values. A simple calculation shows that the sum of the first 53 singular values makes up 70% of the total. Compressing the image with k=53, we get the following result.

[Figure: the original image contrasted with the k=53 compressed image]
The original image occupies about 78.6 KB; after compression, it needs only 3.1 KB to store.

3 Programs

import numpy as np
import matplotlib.pyplot as plt

def svd_decompose(img, s_num):
    """Reconstruct a single-channel image from its first s_num singular values."""
    u, s, vt = np.linalg.svd(img)
    s1 = np.diag(s[:s_num])    # k x k diagonal matrix of leading singular values
    u1 = u[:, :s_num]          # first k left singular vectors
    vt1 = vt[:s_num, :]        # first k right singular vectors
    return u1.dot(s1).dot(vt1)

def RGB_decompose(img, s_num):
    """Compress each RGB channel separately, then stack them back together."""
    original = plt.imread("..\\Image_Compression\\" + img)
    R = svd_decompose(original[:, :, 0], s_num)
    G = svd_decompose(original[:, :, 1], s_num)
    B = svd_decompose(original[:, :, 2], s_num)
    # Clip to the valid 0-255 range: the truncated reconstruction can overshoot.
    return np.clip(np.dstack((R, G, B)), 0, 255).astype(np.uint8)
    

def main():
    # Show reconstructions for an increasing number of singular values.
    k_values = [1, 5, 10, 20, 50, 100]
    plt.figure(num='result', figsize=(12, 8), facecolor='pink')
    for i, k in enumerate(k_values, start=1):
        plt.subplot(2, 3, i)
        plt.imshow(RGB_decompose("lena.jpg", k))
        plt.title('%d Singular Values' % k)
        plt.axis('off')
    plt.show()

if __name__ == '__main__':
    main()


4 Conclusion

In this article, we covered the following:

1. At the beginning, we introduced how an image is stored in computers.

2. Then, we proved the singular value decomposition.

3. Next, we explained why SVD can be applied to image compression and tried to find the best number of singular values to keep.

4. At last, we showed our code and some compressed images for different numbers of singular values.

