Recursive Least Squares in Practice (Code)

This post is taken from a Coursera course and is kept here only as a backup; it will be removed on request.

For background on least squares, you can refer to this earlier article.

Introduction


Given your experience with batch least squares (where all measurements are processed at once), you now decide to convert your batch solution to a recursive one for added flexibility. Recall that you have the following data:

Current (A)    Voltage (V)
0.2            1.23
0.3            1.38
0.4            2.06
0.5            2.47
0.6            3.17

This time, you intend to fit a linear model that includes an offset term, $V = RI + b$. If Ohm's law ($V = RI$) holds, you expect this offset to be very near to zero.

To use the recursive least squares formulation, you must have a prior estimate of the resistance and its associated uncertainty (otherwise, you won't know how to weigh the information you receive from a new measurement). You choose to set the initial parameters under the assumption that your prior estimate of the resistance, $R = 4$, is not very good. Also, since you are fairly sure that Ohm's law ($V = RI$) does, in fact, hold, you feel that it is safe to assume with high confidence that the offset term $b$ is close to zero. After some thought, you choose to initialize the recursive estimator as follows:

$$\hat{R} \sim \mathcal{N}(4, 9.0),\quad \hat{b} \sim \mathcal{N}(0, 0.2)$$

Your initial guess is that $\hat{R}$ follows a Gaussian or normal distribution (recall that you do not know the exact value of $R$, so it must be considered a random variable) with a mean of $4~\Omega$ and a standard deviation of $3~\Omega$ (i.e., a variance of $9~\Omega^{2}$). Similarly, your initial guess is that $\hat{b}$ should also follow a normal distribution with a mean of $0~V$ and a variance of $0.2~V^{2}$.

With the data again in hand, your goals are to:

  1. Fit a line to the data that includes an offset term (i.e., determine the parameters $R$ and $b$ for $y = Rx + b$) by using the method of recursive least squares.
  2. Reflect on the differences between the batch and recursive least squares solutions.

You may assume that the current values are known exactly, and that the voltage measurements are corrupted by additive, independent and identically distributed zero-mean Gaussian noise with a standard deviation of $0.15~V$ (i.e., a variance of $0.0225~V^{2}$). You may also assume that your initial estimates for $\hat{R}$ and $\hat{b}$ are uncorrelated (i.e., the off-diagonal elements of the $2 \times 2$ covariance matrix are zero).
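
Putting these assumptions into matrix form, each scalar voltage measurement and the quantities used to initialize the estimator are:

$$y_k = \mathbf{H}_k\mathbf{x} + v_k, \quad \mathbf{H}_k = \begin{bmatrix} I_k & 1 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} R \\ b \end{bmatrix}, \quad v_k \sim \mathcal{N}(0,\ 0.0225)$$

$$\hat{\mathbf{x}}_0 = \begin{bmatrix} 4 \\ 0 \end{bmatrix}, \quad \mathbf{P}_0 = \begin{bmatrix} 9 & 0 \\ 0 & 0.2 \end{bmatrix}$$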

Getting Started


As before, the first step is to import the necessary Python modules and load the current values and voltage measurements into NumPy arrays:

import numpy as np
from numpy.linalg import inv
import matplotlib.pyplot as plt

I = np.array([[0.2, 0.3, 0.4, 0.5, 0.6]]).T
V = np.array([[1.23, 1.38, 2.06, 2.47, 3.17]]).T

# Plot the data as a scatter plot.
plt.scatter(I, V)
plt.xlabel('Current (A)')
plt.ylabel('Voltage (V)')
plt.grid(True)
plt.show()

Estimating the Slope and Offset Parameters


Batch Estimator

Before implementing the recursive least squares estimator, let's examine the parameter estimates given by the batch least squares method used in the previous assignment. This time, you will be fitting a model that contains an offset, $y = Rx + b$. This result can be used later for comparison.
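
Recall that the batch estimator stacks the five measurements into $\mathbf{y}$ and the corresponding rows $\begin{bmatrix} I_k & 1 \end{bmatrix}$ into $\mathbf{H}$, and then solves the normal equations, which is exactly what the code below computes:

$$\hat{\mathbf{x}}_{\mathrm{LS}} = \left(\mathbf{H}^T\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{y}$$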


## Batch Solution

H = np.ones((5, 2))
H[:, 0] = I.ravel()
x_ls = inv(H.T.dot(H)).dot(H.T.dot(V))
print('The slope and offset parameters of the best-fit line (i.e., the resistance and offset) are [R, b]:')
print(x_ls[0, 0])
print(x_ls[1, 0])

# Plot line.
I_line = np.arange(0, 0.8, 0.1).reshape(8, 1)
V_line = x_ls[0]*I_line + x_ls[1]

plt.scatter(I, V)
plt.plot(I_line, V_line)
plt.xlabel('Current (A)')
plt.ylabel('Voltage (V)')
plt.grid(True)
plt.show()

The output is:

The slope and offset parameters of the best-fit line (i.e., the resistance and offset) are [R, b]:
4.970000000000006
0.07399999999999807

As expected, the offset parameter $\hat{b}$ is near zero, while $\hat{R}$ closely approximates the true resistance value of $R = 5~\Omega$.

Recursive Estimator

Now let’s try to implement the least squares method recursively! Recall the steps described in Module 1, Lesson 2 - “Recursive Least Squares”:

Initialize the parameter and covariance estimates:


$$\hat{\mathbf{x}}_0 = E\left[\mathbf{x}\right], \quad \mathbf{P}_0 = E\left[(\mathbf{x} - \hat{\mathbf{x}}_0)(\mathbf{x} - \hat{\mathbf{x}}_0)^T\right]$$

For every measurement k:


  • Calculate the gain term: $\mathbf{K}_k = \mathbf{P}_{k-1}\mathbf{H}_k^T\left(\mathbf{H}_k\mathbf{P}_{k-1}\mathbf{H}_k^T + \mathbf{R}_k\right)^{-1}$
  • Update the parameter estimate: $\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_{k-1} + \mathbf{K}_k\left(\mathbf{y}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k-1}\right)$
  • Update the covariance estimate: $\mathbf{P}_k = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)\mathbf{P}_{k-1}$

In this case, the initial parameter vector $\hat{\mathbf{x}}_0$ should contain $\hat{R}$ and $\hat{b}$.

In essence, the estimate from time $k-1$ is used to estimate the unknowns at time $k$, and the update requires computing a gain matrix $K_k$.
In addition, a covariance matrix $P$ must be maintained for the quantities being estimated (for the model $V = RI + b$ used here, the resistance $R$ and the offset $b$).
Since the matrix dimensions are easy to get wrong, the shapes are printed out in the code below.
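
For reference, with two parameters and scalar measurements, the dimensions in the gain computation work out as follows:

$$\underbrace{\mathbf{K}_k}_{2\times 1} = \underbrace{\mathbf{P}_{k-1}}_{2\times 2}\,\underbrace{\mathbf{H}_k^T}_{2\times 1}\big(\underbrace{\mathbf{H}_k\mathbf{P}_{k-1}\mathbf{H}_k^T}_{1\times 1} + \underbrace{\mathbf{R}_k}_{1\times 1}\big)^{-1}$$

so $\mathbf{K}_k$ is $2 \times 1$, the innovation $\mathbf{y}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k-1}$ is a scalar, and the updated estimate $\hat{\mathbf{x}}_k$ stays $2 \times 1$.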

## Recursive Solution

# Initialize the 2x1 parameter vector x (i.e., x_0).
x_k = np.array([4,0])
print("x_k.shape: ",x_k.shape)

# Initialize the 2x2 covariance matrix (i.e., P_0). Off-diagonal elements should be zero.
P_k = np.array([[9,0],[0,0.2]])

# Our voltage measurement variance (denoted by R, don't confuse with resistance).
R_k = np.array([[0.0225]])

# Pre allocate space to save our estimates at every step.
num_meas = I.shape[0]
x_hist = np.zeros((num_meas + 1, 2))
P_hist = np.zeros((num_meas + 1, 2, 2))

x_hist[0] = x_k
P_hist[0] = P_k

# Iterate over all the available measurements.
for k in range(num_meas):
    # Construct H_k (Jacobian).
    H_k = H[k, :].reshape((1, -1))    # shape (1, 2)
    print("H_k shape: ",H_k.shape)

    # Construct K_k (gain matrix).
    test_a =  P_k.dot(H_k.T).reshape((-1,1))
    print(test_a.shape)
    test_b = inv(H_k@P_k@(H_k.T)+R_k)
    print(test_b.shape)
    K_k = test_a @ test_b   # (2x1) @ (1x1) -> (2x1)
    print("K_k shape ",K_k.shape)
    
                    
    # Update our estimate.
    x_k = x_k + K_k@(V[k,:] - H_k@x_k)
    print("[Update our estimate] x_k.shape: ", x_k.shape)
 
    # Update our uncertainty (covariance)
    E = np.eye(2)
    P_k = (E-K_k@H_k) @ P_k   

    # Keep track of our history.
    P_hist[k + 1] = P_k
    x_hist[k + 1] = x_k
    
print('The slope and offset parameters of the best-fit line (i.e., the resistance and offset) are [R, b]:')

print(x_k[0])
print(x_k[1])

Plotting the Results


Let’s plot out the solution at each step. Does the resistance value converge towards the batch least squares solution?

plt.scatter(I, V, label='Data')
plt.plot(I_line, V_line, label='Batch Solution')
plt.xlabel('Current (A)')
plt.ylabel('Voltage (V)')
plt.grid(True)

I_line = np.arange(0, 0.8, 0.1).reshape(8, 1)

for k in range(num_meas):
    V_line = x_hist[k, 0]*I_line + x_hist[k, 1]
    plt.plot(I_line, V_line, label='Measurement {}'.format(k))

plt.legend()
plt.show()
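
To check convergence numerically rather than just by eye, you can print the estimate and the parameter variances stored after each update and compare the final values with the batch solution x_ls computed earlier; a minimal sketch, reusing the variables defined above:

# Print the estimate and its variances after each update (k = 0 is the prior).
for k in range(num_meas + 1):
    print('After {} measurement(s): R = {:.4f}, b = {:.4f}, '
          'var(R) = {:.4f}, var(b) = {:.4f}'.format(
              k, x_hist[k, 0], x_hist[k, 1], P_hist[k, 0, 0], P_hist[k, 1, 1]))

# Compare against the batch least squares solution from earlier.
print('Batch solution:          R = {:.4f}, b = {:.4f}'.format(x_ls[0, 0], x_ls[1, 0]))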