The previous post showed that high-degree polynomial fits suffer from a severe Runge phenomenon, which is essentially overfitting. To reduce overfitting, this post uses a Ridge model: ordinary least squares with an L2 regularization term added to the loss function, i.e.

J(w) = Σᵢ (yᵢ - xᵢᵀw)² + α‖w‖₂²
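As a minimal sketch of what this objective implies (assuming the standard design-matrix formulation; `ridge_fit` is an illustrative helper, not part of the script below), ridge regression has the closed-form solution w = (XᵀX + αI)⁻¹Xᵀy, and a larger α pulls the weights toward zero:

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge solution: w = (X^T X + alpha*I)^(-1) X^T y."""
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Tiny example: as alpha grows, the weight vector shrinks toward zero.
rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = X @ np.array([1.0, -2.0, 3.0]) + 0.1 * rng.randn(20)
w_small = ridge_fit(X, y, alpha=0.01)   # almost unregularized
w_large = ridge_fit(X, y, alpha=100.0)  # heavily shrunk
print(np.linalg.norm(w_small), np.linalg.norm(w_large))
```

This shrinkage of the weights is exactly what suppresses the wild oscillations of high-degree polynomial terms.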
The data are generated the same way as in the previous post; the code is as follows:
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import PolynomialFeatures
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.exceptions import ConvergenceWarning
import matplotlib as mpl
mpl.rcParams['font.sans-serif'] = [u'simHei']
mpl.rcParams['axes.unicode_minus'] = False
import warnings
warnings.filterwarnings(action='ignore', category=ConvergenceWarning)
np.random.seed(0)
np.set_printoptions(linewidth=1000)
N = 7
x = np.linspace(0, 6, N) + 0.5 * np.random.randn(N)
x = np.sort(x)
y = x ** 2 - 4 * x - 3 + 2 * np.random.randn(N)
x.shape = -1, 1  # reshape the 1-D array into a 2-D column vector
y.shape = -1, 1
model = Pipeline([('poly', PolynomialFeatures()), ('linear', RidgeCV(alphas=np.logspace(-3, 3, 50),
fit_intercept=False))])
np.set_printoptions(suppress=True)
plt.figure(figsize=(10, 8), facecolor='w')
d_pool = np.arange(1, N, 1)  # polynomial degrees 1..6
m = d_pool.size
title = 'Ridge Regression'
plt.plot(x, y, 'ro', ms=20, zorder=N)
for i, d in enumerate(d_pool):
    print()
    model.set_params(poly__degree=d)
    model.fit(x, y.ravel())
    lin = model.get_params()['linear']
    output = '%s: degree %d, coefficients: ' % (title, d)
    if hasattr(lin, 'alpha_'):
        idx = output.find('coefficients')
        output = output[:idx] + ('alpha=%.6f, ' % lin.alpha_) + output[idx:]
    print(output, lin.coef_.ravel())  # weight vector
    x_hat = np.linspace(x.min(), x.max(), num=100)
    x_hat.shape = -1, 1
    y_hat = model.predict(x_hat)
    s = model.score(x, y)
    print('R^2 score:', s, '\n')
    z = N - 1 if (d == 2) else 0
    # z is the zorder of the line drawn below: a larger value draws the line
    # on top when lines cross, so the degree-2 fit is drawn above the others.
    label = 'degree %d, $R^2$=%.3f' % (d, s)
    plt.plot(x_hat, y_hat, lw=5, alpha=1, label=label, zorder=z)
plt.legend(loc='upper left', facecolor='w', edgecolor='blue', fontsize='xx-large')
plt.grid(True)
plt.title(title, fontsize=18)
plt.xlabel('X', fontsize=16)
plt.ylabel('Y', fontsize=16)
plt.tight_layout(pad=1, rect=(0, 0, 1, 0.95))
# tight_layout automatically adjusts subplot parameters and spacing so the
# plots fill the figure area without overlapping.
plt.suptitle('Comparison of polynomial curve fits', fontsize=22)
plt.show()
The output is as follows:
Ridge Regression: degree 1, alpha=0.068665, coefficients:  [-11.09183404 3.09782902]
R^2 score: 0.8460600310861124
Ridge Regression: degree 2, alpha=0.494171, coefficients:  [-3.42855634 -2.97015753 0.86271087]
R^2 score: 0.9836296525108054
Ridge Regression: degree 3, alpha=0.012649, coefficients:  [-3.32690391 -4.24519224 1.37668969 -0.04999279]
R^2 score: 0.9894321838180155
Ridge Regression: degree 4, alpha=0.068665, coefficients:  [-4.07419894 -2.09967222 -0.12215232 0.30552081 -0.02666889]
R^2 score: 0.9897035432186699
Ridge Regression: degree 5, alpha=0.159986, coefficients:  [-3.59381377 -2.19145326 -0.47405292 0.50132392 -0.06314451 0.00226918]
R^2 score: 0.9886739475324553
Ridge Regression: degree 6, alpha=0.372759, coefficients:  [-3.06256844 -2.08906926 -0.72738309 0.41064889 0.03296081 -0.01864496 0.00139963]
R^2 score: 0.9855008989656829
The resulting figure:
As the figure shows, the Runge phenomenon disappears once the regularization term is added. Looking at the coefficients, the higher-order terms receive smaller weights, which suppresses their contribution. Notice also that with regularization, even the degree-6 polynomial no longer passes exactly through all of the data points.
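To see the shrinkage effect directly, here is a quick follow-up sketch (not part of the script above; it refits the degree-6 model with a plain fixed-alpha `Ridge` instead of `RidgeCV`) that shows the coefficient norm falling as alpha grows:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

# Same data generation as in the article.
np.random.seed(0)
N = 7
x = np.sort(np.linspace(0, 6, N) + 0.5 * np.random.randn(N)).reshape(-1, 1)
y = (x ** 2 - 4 * x - 3).ravel() + 2 * np.random.randn(N)

norms = []
for alpha in (1e-8, 1e-2, 1e2):
    model = Pipeline([
        ('poly', PolynomialFeatures(degree=6)),
        ('ridge', Ridge(alpha=alpha, fit_intercept=False)),
    ])
    model.fit(x, y)
    coef = model.named_steps['ridge'].coef_
    norms.append(np.linalg.norm(coef))
    print('alpha=%g, ||w|| = %.3f' % (alpha, norms[-1]))
```

With alpha near zero the degree-6 polynomial essentially interpolates the 7 points and needs large coefficients; as alpha increases, the penalty forces the weights (especially the high-order ones) toward zero, trading exact interpolation for a smoother curve.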