The total loss is the MSE plus an L2 regularization term, which can be written compactly using the L2 norm of the weight vector.
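A common formulation (a sketch of the standard form, with alpha as the regularization strength; the bias term is usually excluded from the penalty):

\[
J_{\text{Ridge}}(\theta) = \mathrm{MSE}(\theta) + \alpha \sum_{i=1}^{n} \theta_i^{2} = \mathrm{MSE}(\theta) + \alpha \lVert \theta \rVert_2^{2}
\]
\[
J_{\text{Lasso}}(\theta) = \mathrm{MSE}(\theta) + \alpha \sum_{i=1}^{n} \lvert \theta_i \rvert = \mathrm{MSE}(\theta) + \alpha \lVert \theta \rVert_1
\]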
The examples below use Ridge (L2) and Lasso (L1) from scikit-learn.
import numpy as np
from sklearn.linear_model import Ridge
X = 2*np.random.rand(100,1)
y = 4 + 3*X + np.random.randn(100, 1)
'''
Create a Ridge estimator. alpha is the regularization strength: a larger alpha puts more
weight on shrinking the coefficients (favoring generalization), a smaller alpha puts more
weight on fitting the training set accurately.
solver selects the optimization algorithm; 'sag' is Stochastic Average Gradient descent.
'''
ridge_reg = Ridge(alpha=400, solver='sag')
ridge_reg.fit(X, y)
print(ridge_reg.predict([[1.5]]))
# Intercept term
print(ridge_reg.intercept_)
# Print the remaining coefficients (the weights)
print(ridge_reg.coef_)
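To see the alpha trade-off directly, here is a minimal sketch that refits the same toy data with increasing regularization strength (the alpha values below are arbitrary illustrative choices, not from the original example):

import numpy as np
from sklearn.linear_model import Ridge

X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)

# As alpha grows, the learned slope is shrunk toward 0 and the fit
# relies more on the intercept.
for alpha in (0.01, 1, 100, 10000):
    reg = Ridge(alpha=alpha)
    reg.fit(X, y)
    print(alpha, reg.coef_.ravel(), reg.intercept_)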
import numpy as np
from sklearn.linear_model import Lasso
X = 2*np.random.rand(100,1)
y = 4 + 3*X + np.random.randn(100, 1)
# Lasso solves the L1-penalized problem with coordinate descent;
# alpha is the regularization strength, max_iter is the maximum number of iterations.
lasso_reg = Lasso(alpha=0.15, max_iter=30000)
lasso_reg.fit(X, y)
print(lasso_reg.predict([[1.5]]))
print(lasso_reg.intercept_)
print(lasso_reg.coef_)
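A key property of the L1 penalty is that it can drive some coefficients exactly to zero, effectively performing feature selection. A minimal sketch of this (the extra noise features and the alpha value are assumptions for illustration, not part of the original example):

import numpy as np
from sklearn.linear_model import Lasso

np.random.seed(0)
X = 2 * np.random.rand(100, 3)                  # 3 features, only the first is informative
y = 4 + 3 * X[:, 0] + 0.1 * np.random.randn(100)

lasso_reg = Lasso(alpha=0.1, max_iter=30000)
lasso_reg.fit(X, y)
# The coefficients of the uninformative features are typically pushed to exactly 0.
print(lasso_reg.coef_)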
Implementation using stochastic gradient descent (SGDRegressor):
import numpy as np
from sklearn.linear_model import SGDRegressor
X = 2*np.random.rand(100,1)
y = 4 + 3*X + np.random.randn(100, 1)
# SGDRegressor fits with stochastic gradient descent; penalty selects the regularization
# term ('l2', 'l1', or 'elasticnet'), max_iter is the maximum number of passes over the data.
sgd_reg = SGDRegressor(penalty='l2', max_iter=1000)
sgd_reg.fit(X, y.reshape(-1,))
print(sgd_reg.predict([[1.5]]))
print(sgd_reg.intercept_)
print(sgd_reg.coef_)
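The same estimator covers the other penalties as well; a minimal sketch assuming the same toy X and y as above (the alpha value here is an arbitrary choice):

# L1 penalty instead of L2, i.e. Lasso-style regularization trained with SGD.
sgd_l1 = SGDRegressor(penalty='l1', alpha=0.0001, max_iter=1000)
sgd_l1.fit(X, y.ravel())
print(sgd_l1.predict([[1.5]]))
print(sgd_l1.coef_, sgd_l1.intercept_)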