Notes on regularization:
These are beginner's notes, kept for the record; corrections are welcome if anything is wrong.
Regularization:
Loosely speaking, regularization is a process for reducing overfitting. It works by adding rules (constraints) to the loss function, shrinking the solution space and thereby reducing the chance of ending up with an overfitted solution. Common examples include weight decay, data augmentation, and Dropout.
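The "add a rule to the loss" idea can be sketched numerically. A minimal illustration (the weight vector `w`, data loss `mse`, and strength `lam` below are made-up values, not from the notes): the L1 penalty adds λ·‖w‖₁ to the data loss, the L2 penalty adds λ·‖w‖₂².

```python
import numpy as np

w = np.array([0.5, -2.0, 0.0, 3.0])  # hypothetical weight vector
mse = 1.2                            # hypothetical data loss (e.g. MSE)
lam = 0.1                            # regularization strength (lambda)

l1_penalty = lam * np.sum(np.abs(w))  # L1: lambda * ||w||_1
l2_penalty = lam * np.sum(w ** 2)     # L2: lambda * ||w||_2^2

loss_l1 = mse + l1_penalty  # Lasso-style objective
loss_l2 = mse + l2_penalty  # Ridge-style objective
print(loss_l1, loss_l2)
```

Minimizing the penalized objective trades a little training accuracy for smaller (L2) or sparser (L1) weights, which is exactly the "shrink the solution space" effect described above.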
Implementing regularization:
(Since L1 and L2 regularization are the most common in machine learning, only these two are recorded here.)
1. L1 regularization (Lasso)
import numpy as np
import torch as th
import torch.nn as nn
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

np.random.seed(1)  # fix the random seed
n = 40
X = 2 * np.random.rand(n, 1)  # create forty random points
y = 2 + 1.5 * X + np.random.randn(n, 1) / 0.5  # linear relation plus noise
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# L1 regularization
loss = nn.L1Loss(reduction='sum')  # L1 loss, used here only as a test metric
lasso = Lasso(alpha=1, max_iter=10000)
lasso.fit(X_train, y_train)
y_pred = lasso.predict(X_test)
a = th.tensor(y_pred)
b = th.tensor(y_test).reshape(a.shape)  # make shapes match before comparing
loss_2 = loss(a, b)
print(loss_2)
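A property of Lasso worth noting (a quick sketch, not from the original notes; the data here is synthetic): the L1 penalty tends to push some coefficients exactly to zero, which is why Lasso is often used for feature selection. A larger alpha means stronger shrinkage:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(1)
X = rng.randn(100, 5)
# only the first two features actually influence y
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(100)

for alpha in (0.01, 1.0):
    model = Lasso(alpha=alpha, max_iter=10000).fit(X, y)
    # with alpha=1.0 the three irrelevant coefficients collapse to exactly 0
    print(alpha, model.coef_)
```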
2. L2 regularization (Ridge)
import numpy as np
import torch as th
import torch.nn as nn
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

np.random.seed(1)  # fix the random seed
n = 40
X = 2 * np.random.rand(n, 1)  # create forty random points
y = 2 + 1.5 * X + np.random.randn(n, 1) / 0.5  # linear relation plus noise
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# L2 regularization
loss = nn.L1Loss(reduction='sum')  # L1 loss, used here only as a test metric
ridge = Ridge(alpha=1, max_iter=10000)
ridge.fit(X_train, y_train)
y_pred = ridge.predict(X_test)
a = th.tensor(y_pred)
b = th.tensor(y_test).reshape(a.shape)  # make shapes match before comparing
loss_2 = loss(a, b)
print(loss_2)
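The "weight decay" mentioned at the start is how L2 regularization usually shows up in deep-learning frameworks: rather than editing the loss function by hand, the optimizer shrinks the weights a little at every update. A minimal sketch using PyTorch's `weight_decay` parameter (the model, data, and hyperparameters here are made up for illustration):

```python
import torch as th
import torch.nn as nn

th.manual_seed(1)
model = nn.Linear(1, 1)
# weight_decay adds an L2 penalty on the weights to every SGD update
opt = th.optim.SGD(model.parameters(), lr=0.1, weight_decay=0.01)

X = th.rand(40, 1) * 2
y = 2 + 1.5 * X + th.randn(40, 1) * 2  # same shape of data as above, with noise
loss_fn = nn.MSELoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(model.weight.item(), model.bias.item())
```

Setting `weight_decay=0` recovers plain SGD; increasing it pulls the learned weight closer to zero, the same trade-off Ridge makes through alpha.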