Computing eigenvalues and eigenvectors in Python: eigenvalues and eigenvectors in Python vs Matlab

I have noticed a difference between how Matlab and NumPy compute the eigenvalues and eigenvectors of a matrix: Matlab returns real-valued results, while NumPy returns complex-valued eigenvalues and eigenvectors. For example, for the matrix

A =
     1    -3     3
     3    -5     3
     6    -6     4

NumPy:

w, v = np.linalg.eig(A)

w
array([ 4. +0.00000000e+00j, -2. +1.10465796e-15j, -2. -1.10465796e-15j])

v
array([[-0.40824829+0.j        ,  0.24400118-0.40702229j,
         0.24400118+0.40702229j],
       [-0.40824829+0.j        , -0.41621909-0.40702229j,
        -0.41621909+0.40702229j],
       [-0.81649658+0.j        , -0.66022027+0.j        ,
        -0.66022027-0.j        ]])

Matlab:

[E, D] = eig(A)

E =
   -0.4082   -0.8103    0.1933
   -0.4082   -0.3185   -0.5904
   -0.8165    0.4918   -0.7836

D =
    4.0000         0         0
         0   -2.0000         0
         0         0   -2.0000

Is there a way to get the real eigenvalues in Python, as Matlab does?

Solution

To get NumPy to return a diagonal array of real eigenvalues when the complex part is small, you could use

In [116]: np.real_if_close(np.diag(w))
Out[116]:
array([[ 4.,  0.,  0.],
       [ 0., -2.,  0.],
       [ 0.,  0., -2.]])
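np.real_if_close only drops the imaginary part when it is smaller than a tolerance (by default 100 machine epsilons for the array's dtype), so tiny numerical artifacts such as 1.1e-15j are stripped while genuinely complex values are left untouched. A minimal sketch:

import numpy as np

# Eigenvalues with negligible imaginary parts, like those returned by np.linalg.eig above.
w = np.array([4.0 + 0.0j, -2.0 + 1.1e-15j, -2.0 - 1.1e-15j])

# Imaginary parts are far below the default tolerance, so a real-valued array comes back.
print(np.real_if_close(np.diag(w)))

# A value whose imaginary part is not negligible is returned unchanged (still complex).
print(np.real_if_close(np.array([1.0 + 0.5j])))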

According to the Matlab docs, [E, D] = eig(A) returns E and D which satisfy A*E = E*D.

I don't have Matlab, so I'll use Octave to check the result you posted:

octave:1> A = [[1, -3, 3],
               [3, -5, 3],
               [6, -6, 4]]

octave:6> E = [[-0.4082, -0.8103,  0.1933],
               [-0.4082, -0.3185, -0.5904],
               [-0.8165,  0.4918, -0.7836]]

octave:25> D = [[4.0000, 0, 0],
               [0, -2.0000, 0],
               [0, 0, -2.0000]]

octave:29> abs(A*E - E*D)
ans =

   3.0000e-04   0.0000e+00   3.0000e-04
   3.0000e-04   2.2204e-16   3.0000e-04
   0.0000e+00   4.4409e-16   6.0000e-04

The magnitude of the errors is mainly due to the values reported by Matlab being displayed to a lower precision than the actual values Matlab holds in memory.
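The same check can be reproduced in NumPy. Here is a minimal sketch using the four-decimal values printed by Matlab (E_rounded and D_rounded are names introduced here for illustration):

import numpy as np

A = np.array([[1, -3, 3],
              [3, -5, 3],
              [6, -6, 4]])

# Eigenvector and eigenvalue matrices as printed by Matlab, rounded to four decimals.
E_rounded = np.array([[-0.4082, -0.8103,  0.1933],
                      [-0.4082, -0.3185, -0.5904],
                      [-0.8165,  0.4918, -0.7836]])
D_rounded = np.diag([4.0, -2.0, -2.0])

# The largest residual is about 6e-04, consistent with the four-decimal rounding.
print(np.abs(A @ E_rounded - E_rounded @ D_rounded).max())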

In NumPy, w, v = np.linalg.eig(A) returns w and v which satisfy np.dot(A, v) = np.dot(v, np.diag(w)):

In [113]: w, v = np.linalg.eig(A)

In [135]: np.set_printoptions(formatter={'complex_kind': '{:+15.5f}'.format})

In [136]: v
Out[136]:
array([[-0.40825+0.00000j, +0.24400-0.40702j, +0.24400+0.40702j],
       [-0.40825+0.00000j, -0.41622-0.40702j, -0.41622+0.40702j],
       [-0.81650+0.00000j, -0.66022+0.00000j, -0.66022-0.00000j]])

In [116]: np.real_if_close(np.diag(w))
Out[116]:
array([[ 4.,  0.,  0.],
       [ 0., -2.,  0.],
       [ 0.,  0., -2.]])

In [112]: np.abs((np.dot(A, v) - np.dot(v, np.diag(w))))
Out[112]:
array([[4.44089210e-16, 3.72380123e-16, 3.72380123e-16],
       [2.22044605e-16, 4.00296604e-16, 4.00296604e-16],
       [8.88178420e-16, 1.36245817e-15, 1.36245817e-15]])

In [162]: np.abs((np.dot(A, v) - np.dot(v, np.diag(w)))).max()
Out[162]: 1.3624581677742195e-15

In [109]: np.isclose(np.dot(A, v), np.dot(v, np.diag(w))).all()
Out[109]: True
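A self-contained script version of the same verification (a sketch, assuming the same matrix A as above):

import numpy as np

A = np.array([[1, -3, 3],
              [3, -5, 3],
              [6, -6, 4]])

w, v = np.linalg.eig(A)

# The defining relation A @ v == v @ diag(w) holds up to floating-point error.
assert np.allclose(A @ v, v @ np.diag(w))

# Real-valued diagonal eigenvalue matrix, analogous to Matlab's D.
print(np.real_if_close(np.diag(w)))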
