Machine Learning: Polynomial Regression, Regularization, and Grid Search Cross-Validation

Linear regression cannot handle nonlinear problems, so a nonlinear model is needed. Polynomial regression arose for exactly this purpose: it expands the feature space with polynomial terms, then fits the expanded features with ordinary linear regression.

from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
import numpy as np
import matplotlib.pyplot as plt

x = np.random.uniform(-3,3,size=100)
X = x.reshape(-1,1)
y = 0.5*x**2+x+2+np.random.normal(0,1,size=100)
plt.scatter(x,y)
plt.show()

(Figure: scatter plot of the noisy quadratic data)

x2 = np.hstack([X,X**2])
lin_reg2 = LinearRegression()
lin_reg2.fit(x2,y)
y_predict2 = lin_reg2.predict(x2)
plt.scatter(x,y)
# sort x, and reorder the predictions by the same indices so the curve is drawn left to right
plt.plot(np.sort(x),y_predict2[np.argsort(x)],color='r')
plt.show()

(Figure: quadratic curve fitted on the manually expanded features)

from sklearn.preprocessing import PolynomialFeatures
# interaction_only=False keeps each feature's own powers (x**2 here);
# interaction_only=True would drop them and keep only cross-feature interaction terms
poly = PolynomialFeatures(degree=2,interaction_only=False,include_bias=True)
poly.fit(X)
x2 = poly.transform(X)
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(x2,y)
y_predict = lin_reg.predict(x2)
plt.scatter(x,y)
# sort x, and reorder the predictions by the same indices so the curve is drawn left to right
plt.plot(np.sort(x),y_predict[np.argsort(x)],color='r')
plt.show()
print(lin_reg2.coef_,lin_reg2.intercept_)
print(lin_reg.coef_,lin_reg.intercept_)
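To see exactly which columns PolynomialFeatures produces, here is a minimal check; the toy input values are my own illustration, not from the original post:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# two one-feature samples make the expanded columns easy to read
X_demo = np.array([[2.0], [3.0]])
poly = PolynomialFeatures(degree=2, interaction_only=False, include_bias=True)
X_expanded = poly.fit_transform(X_demo)

# columns are: bias (1), x, x**2
print(X_expanded)
# [[1. 2. 4.]
#  [1. 3. 9.]]
```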

(Figure: quadratic fit produced via PolynomialFeatures, identical to the manual expansion)

**A Pipeline achieves the same result with less code.**

from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler

degree = 2
poly_reg = Pipeline([
    ('poly',PolynomialFeatures(degree=degree)),
    ('std_scaler',StandardScaler()),
    ('lin_reg',LinearRegression())
])

poly_reg.fit(X,y)
y_predict_p = poly_reg.predict(X)
plt.scatter(x,y)
plt.plot(np.sort(x),y_predict_p[np.argsort(x)],color='b')
plt.show()
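After fitting, each stage of the Pipeline remains accessible through `named_steps`, so the learned coefficients can still be inspected. A short sketch, reusing the same data-generation setup as above:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression

np.random.seed(666)
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, size=100)

poly_reg = Pipeline([
    ('poly', PolynomialFeatures(degree=2)),
    ('std_scaler', StandardScaler()),
    ('lin_reg', LinearRegression())
])
poly_reg.fit(X, y)

# each named step is accessible after fitting; three coefficients: bias column, x, x**2
print(poly_reg.named_steps['lin_reg'].coef_)
print(poly_reg.named_steps['lin_reg'].intercept_)
```

Because the features are standardized to zero mean before the final regression, the intercept equals the mean of `y`.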

(Figure: quadratic fit produced by the Pipeline)

1. Underfitting: too few features → expand the dimensionality (feature expansion); too little data → collect more data.

2. Overfitting: too many features → dimensionality reduction, regularization, or feature selection; large differences in feature scales → feature scaling.

from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

degree1 = 2
poly_reg1 = Pipeline([
    ('poly',PolynomialFeatures(degree=degree1)),
    ('std_scaler',StandardScaler()),
    ('lin_reg',LinearRegression())
])

x_train,x_test,y_train,y_test = train_test_split(X,y,random_state=666)
poly_reg1.fit(x_train,y_train)
y_predict_train = poly_reg1.predict(x_train)
y_predict_test = poly_reg1.predict(x_test)
print(mean_squared_error(y_train,y_predict_train))
print(mean_squared_error(y_test,y_predict_test))

plt.scatter(x,y)
# plt.plot(np.sort(x),y_predict_test[np.argsort(x)],color='b')
plt.show()

degree=2: train MSE 0.691297017407, test MSE 0.705731455611
degree=30: train MSE 0.886211236857, test MSE 1.14385816223
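The comparison above can be reproduced as a single sweep over degrees; this sketch regenerates data matching the post's setup (seed 666, as used later in the post):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

np.random.seed(666)
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, size=100)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=666)

for degree in [1, 2, 10, 30]:
    model = Pipeline([
        ('poly', PolynomialFeatures(degree=degree)),
        ('std_scaler', StandardScaler()),
        ('lin_reg', LinearRegression())
    ])
    model.fit(x_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(x_train))
    test_mse = mean_squared_error(y_test, model.predict(x_test))
    # underfitting: both errors high; overfitting: train error low, test error climbs
    print(f'degree={degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}')
```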

Adding regularization improves the model's ability to generalize.

L1 regularization: useful for feature selection; many weights are driven rapidly to exactly 0. Objective: empirical risk + lambda*|w|.

L2 regularization: combats overfitting; weights are shrunk toward 0 but rarely reach it exactly. Objective: empirical risk + lambda*|w|^2.
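This sparsity difference shows up directly in the fitted coefficients. The following sketch (the degree and alpha values here are illustrative choices, not from the post) counts how many weights each penalty drives to exactly 0:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

np.random.seed(666)
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, size=100)

def fit_coefs(reg):
    model = Pipeline([
        ('poly', PolynomialFeatures(degree=10)),
        ('std_scaler', StandardScaler()),
        ('reg', reg)
    ])
    model.fit(X, y)
    return model.named_steps['reg'].coef_

lasso_coefs = fit_coefs(Lasso(alpha=0.1))
ridge_coefs = fit_coefs(Ridge(alpha=0.1))

# L1 zeroes out most of the useless high-order terms; L2 only shrinks them
print('Lasso zero coefficients:', np.sum(lasso_coefs == 0))
print('Ridge zero coefficients:', np.sum(ridge_coefs == 0))
```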

from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

np.random.seed(666)
x_train,x_test,y_train,y_test = train_test_split(X,y,random_state=666)

def PolynomialRegression(degree):
    return Pipeline([
        ('poly',PolynomialFeatures(degree=degree)),
        ('std_reg',StandardScaler()),
        ('lin_reg',LinearRegression())
    ])

def plot_model(model):
    x_plot = np.linspace(-3,3,100).reshape(100,1)
    y_plot = model.predict(x_plot)
    plt.scatter(x,y)
    plt.plot(x_plot,y_plot,color='r')
    plt.axis([-3,3,0,6])
    plt.show()
#     print(x_plot,y_plot)
poly_reg20 = PolynomialRegression(degree=20)
poly_reg20.fit(X,y)
plot_model(poly_reg20)

Polynomial regression at degree=20: the model clearly overfits.

(Figure: wildly oscillating degree-20 fit)

from sklearn.linear_model import Lasso

def LassoRegression(degree,alpha):
    return Pipeline([
        ('poly',PolynomialFeatures(degree=degree)),
        ('std_scaler',StandardScaler()),
        ('lin_reg',Lasso(alpha=alpha))
    ])

lasso1 = LassoRegression(50,0.5)
lasso1.fit(x_train,y_train)
y_predict_lasso = lasso1.predict(x_test)
plot_model(lasso1)

Effect of adding L1 regularization:

(Figure: Lasso-regularized degree-50 fit, much smoother)

from sklearn.linear_model import Ridge

def RidgeRegression(degree,alpha):
    return Pipeline([
        ('poly',PolynomialFeatures(degree=degree)),
        ('std_scaler',StandardScaler()),
        ('lin_reg',Ridge(alpha=alpha))
    ])

ridge2 = RidgeRegression(20,0.01)
ridge2.fit(x_train,y_train)
y_predict_ridge = ridge2.predict(x_test)
plot_model(ridge2)

Effect of adding L2 regularization:

(Figure: Ridge-regularized degree-20 fit)

from sklearn.linear_model import ElasticNet

def ElasticNetRegression(degree,alpha,l1_ratio):
    return Pipeline([
        ('poly',PolynomialFeatures(degree=degree)),
        ('std_scaler',StandardScaler()),
        ('lin_reg',ElasticNet(alpha=alpha,l1_ratio=l1_ratio))
    ])

elasticnet3 = ElasticNetRegression(20,0.05,0.5)
elasticnet3.fit(x_train,y_train)
y_predict_en = elasticnet3.predict(x_test)
plot_model(elasticnet3)

Effect of combining L1 and L2 (ElasticNet):

(Figure: ElasticNet-regularized degree-20 fit)

Grid search cross-validation

from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV

x_train,x_test,y_train,y_test = train_test_split(X,y,random_state=666)
model_1 = Lasso()
model_2 = Ridge()

# candidate alpha values, log-spaced from 1e-3 to 1e2
alpha_can = np.logspace(-3,2,10)

lasso_model = GridSearchCV(model_1,param_grid={'alpha':alpha_can},cv=5)
lasso_model.fit(X,y)
ridge_model = GridSearchCV(model_2,param_grid={'alpha':alpha_can},cv=5)
ridge_model.fit(X,y)

print('Best L1 parameter:\n',lasso_model.best_params_)
print('Best L2 parameter:\n',ridge_model.best_params_)
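By default GridSearchCV refits the best parameter setting on the full data (`refit=True`), so the fitted search object can be used directly for prediction, and `best_score_` reports the mean cross-validated score. A minimal sketch with Ridge, reusing the same data setup:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

np.random.seed(666)
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, size=100)

alpha_can = np.logspace(-3, 2, 10)
ridge_model = GridSearchCV(Ridge(), param_grid={'alpha': alpha_can}, cv=5)
ridge_model.fit(X, y)

# refit=True (the default) retrains the winning model on all the data,
# so the search object itself can score and predict
print('best alpha:', ridge_model.best_params_['alpha'])
print('mean CV R^2:', ridge_model.best_score_)
print('prediction at x=0:', ridge_model.predict([[0.0]]))
```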