Regularization
Three forms of fit in linear regression:
Note: when we say "linear" or "nonlinear" here, we are referring to the coefficients of the independent variables, not the variables themselves. As long as the model is linear in those coefficients, we call it linear, no matter how the independent variables themselves are transformed. This is also why polynomial regression can still be described as linear regression. For example:
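one such model (an illustrative example) is a cubic polynomial in a single variable x:

$$
h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3
$$

which is nonlinear in x but linear in the coefficients $\theta_0,\dots,\theta_3$; substituting $x_1 = x$, $x_2 = x^2$, $x_3 = x^3$ turns it into ordinary multivariate linear regression.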
The first model is linear and underfits the data; this is also called high bias, and it cannot fit the training set well. The third model is a fourth-order polynomial; it puts so much emphasis on fitting the original data that it loses the point of the exercise: asked to predict a new value, it will perform poorly. This is overfitting, also called high variance: the model fits the training set very well but generalizes poorly to new inputs. The model in the middle seems the most appropriate.
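For concreteness, three hypotheses matching that description might look like the following (an illustrative sketch, with a quadratic assumed for the middle model):

$$h_\theta(x) = \theta_0 + \theta_1 x \quad \text{(underfitting, high bias)}$$

$$h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 \quad \text{(a reasonable fit)}$$

$$h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \theta_4 x^4 \quad \text{(overfitting, high variance)}$$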
The blue curve shows the fit before regularization and the red curve the fit after regularization. When λ is very small, regularization has almost no effect. When λ is very large, the parameters $\theta_1,\dots,\theta_n$ are penalized so heavily that the fitted function degenerates to $h_\theta(x) = \theta_0$, roughly a horizontal line, which underfits the data.
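One way to see why a large λ forces the fit toward a horizontal line: in the usual gradient descent update for regularized linear regression (with $\theta_0$ left unregularized), every step multiplies $\theta_j$ by a factor slightly smaller than 1:

$$
\theta_j := \theta_j\left(1 - \alpha\frac{\lambda}{m}\right) - \alpha\frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)x_j^{(i)}, \qquad j = 1,\dots,n
$$

so the larger λ is, the more strongly each $\theta_j$ (for $j \geq 1$) is pulled toward zero.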
L2 regularization code snippet:
In linear regression, we add a regularization term (the L2 norm of the parameters) to the cost function; it takes the following form:
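A standard way to write it (following the common convention of not regularizing $\theta_0$) is:

$$
J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2 + \lambda\sum_{j=1}^{n}\theta_j^2\right]
$$

where λ controls the trade-off between fitting the training data and keeping the parameters small.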
```python
loss = tf.reduce_mean(tf.losses.categorical_crossentropy(y_onehot, out, from_logits=True))

# L2 regularization
loss_regularization = []
for p in network.trainable_variables:  # trainable parameters [w, b]; you can also pick only specific ones to penalize
    loss_regularization.append(tf.nn.l2_loss(p))  # half of the squared L2 norm of each parameter tensor
loss_regularization = tf.reduce_sum(tf.stack(loss_regularization))  # sum the per-parameter penalties
loss = loss + 0.0001 * loss_regularization  # add the weighted penalty to the cross-entropy loss
```
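As an aside, Keras can also track the penalty per layer through the kernel_regularizer argument instead of summing tf.nn.l2_loss by hand. A minimal sketch under that assumption (note that tf.keras.regularizers.l2(c) computes c·Σw², i.e. 2c times tf.nn.l2_loss, and penalizes only the layer's weights, not its bias):

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Sequential

# Two-layer sketch only; the complete network below uses five Dense layers.
network = Sequential([
    layers.Dense(256, activation='relu',
                 kernel_regularizer=regularizers.l2(0.0001)),  # per-layer L2 penalty on the weights
    layers.Dense(10, kernel_regularizer=regularizers.l2(0.0001)),
])
network.build(input_shape=(None, 28 * 28))

# In a custom training loop the per-layer penalties accumulate in network.losses,
# so the total loss becomes: loss = cross_entropy + tf.add_n(network.losses)
```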
Complete code:

```python
import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics


def preprocess(x, y):
    # scale pixel values to [0, 1] and cast labels to int
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)
    return x, y


batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

optimizer = optimizers.Adam(learning_rate=0.01)

for step, (x, y) in enumerate(db):

    with tf.GradientTape() as tape:
        # [b, 28, 28] => [b, 784]
        x = tf.reshape(x, (-1, 28 * 28))
        # [b, 784] => [b, 10]
        out = network(x)
        # [b] => [b, 10]
        y_onehot = tf.one_hot(y, depth=10)
        # average cross-entropy over the batch (scalar)
        loss = tf.reduce_mean(tf.losses.categorical_crossentropy(y_onehot, out, from_logits=True))

        # L2 regularization
        loss_regularization = []
        for p in network.trainable_variables:  # trainable parameters [w, b]; you can also pick only specific ones to penalize
            loss_regularization.append(tf.nn.l2_loss(p))  # half of the squared L2 norm of each parameter tensor
        loss_regularization = tf.reduce_sum(tf.stack(loss_regularization))  # sum the per-parameter penalties

        loss = loss + 0.0001 * loss_regularization

    grads = tape.gradient(loss, network.trainable_variables)
    optimizer.apply_gradients(zip(grads, network.trainable_variables))

    if step % 100 == 0:
        print(step, 'loss:', float(loss), 'loss_regularization:', float(loss_regularization))

    # evaluate
    if step % 500 == 0:
        total, total_correct = 0., 0
        for x, y in ds_val:  # iterate without reusing `step`, so the training step counter is preserved
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # [b, 784] => [b, 10]
            out = network(x)
            # [b, 10] => [b]
            pred = tf.argmax(out, axis=1)
            pred = tf.cast(pred, dtype=tf.int32)
            # bool tensor
            correct = tf.equal(pred, y)
            # bool tensor => int tensor => numpy
            total_correct += tf.reduce_sum(tf.cast(correct, dtype=tf.int32)).numpy()
            total += x.shape[0]

        print(step, 'Evaluate Acc:', total_correct / total)
```
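The 0.0001 factor in `loss = loss + 0.0001 * loss_regularization` plays the role of λ in the cost function above: if it is too small the penalty has essentially no effect, and if it is too large the network is pushed toward the underfitting regime described earlier, so it is worth treating as a hyperparameter and tuning against validation accuracy.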