This is a knowledge-consolidation exercise. It differs slightly from the code written earlier, but the idea is the same: build a model, run a forward pass to compute the loss, backpropagate to get the gradient of each parameter, update the weights, compute the loss again, and repeat.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from sklearn.datasets import fetch_california_housing
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
house = fetch_california_housing()
x, y = house.data, house.target
'''
print(x.shape, y.shape)
'''
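# For reference, with the standard fetch_california_housing dataset the print
# above should show (20640, 8) (20640,): 20640 samples, 8 features, and one
# regression target per sample.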
# Split the data for later training, validation, and testing: 25% is held out for
# testing, and 25% of the remainder for validation (roughly 56%/19%/25%)
x_train_all, x_test, y_train_all, y_test = train_test_split(x, y, random_state=1, test_size=0.25)
x_train, x_valid, y_train, y_valid = train_test_split(x_train_all, y_train_all,
                                                      random_state=2,
                                                      test_size=0.25)
# Preprocess the data. It is worth understanding why standardization is needed:
# features on very different scales make gradient descent converge slowly, so each
# feature is rescaled to zero mean and unit variance. The scaler is fitted on the
# training set only, then applied unchanged to the validation and test sets.
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_valid)
x_test_scaled = scaler.transform(x_test)
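# Optional sanity check (a minimal sketch): after standardization, every training
# feature should have mean ~0 and standard deviation ~1.
'''
print(x_train_scaled.mean(axis=0).round(2))
print(x_train_scaled.std(axis=0).round(2))
'''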
# Build the model and define its forward pass
model = keras.Sequential()
model.add(keras.layers.Dense(30, activation='relu', input_dim=x_train.shape[-1]))
model.add(keras.layers.Dense(30, activation='relu'))
model.add(keras.layers.Dense(1))
'''
print(model.summary())
print(model.variables)
'''
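# Hand check of the parameter counts model.summary() should report,
# assuming the 8 input features of this dataset:
#   Dense(30): 8*30 + 30  = 270
#   Dense(30): 30*30 + 30 = 930
#   Dense(1):  30*1 + 1   = 31
#   total: 1,231 trainable parameters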
# When the data volume is large, the data can be fed to the model in batches for training
epochs = 10
batch_size = 64
steps_per_epoch = x_train.shape[0] // batch_size
learning_rate = 1e-3
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)  # 'lr' is a deprecated alias for learning_rate
# Define a function that randomly draws batch_size samples for each training step
def random_batch(x, y, batch_size):
    # sample batch_size indices with replacement
    ix = np.random.randint(0, x.shape[0], batch_size)
    return x[ix], y[ix]
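# An alternative sketch (not used below): tf.data can do the batching instead.
# Unlike random_batch, it shuffles without replacement, so every sample is seen
# exactly once per pass over the dataset.
'''
dataset = (tf.data.Dataset.from_tensor_slices((x_train_scaled, y_train))
           .shuffle(buffer_size=len(x_train_scaled))
           .batch(batch_size))
for x_batch, y_batch in dataset:
    ...  # same forward/backward steps as in the loop below
'''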
loss_ls = []
for epoch in range(epochs):
    for step in range(steps_per_epoch):
        x_batch, y_batch = random_batch(x_train_scaled, y_train, batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(x_batch)
            # squeeze predictions from (batch_size, 1) to (batch_size,) so the
            # elementwise error against y_batch does not broadcast to a matrix
            losses = tf.reduce_mean(
                keras.losses.mean_squared_error(y_batch, tf.squeeze(y_pred, axis=-1)))
        loss_ls.append(losses.numpy())
        if (step % 10) == 0:
            print('epoch:%d==>step:%d==>loss:%.4f' % (epoch, step, losses))
        grads = tape.gradient(losses, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
plt.figure(figsize=(9, 5))
plt.plot(range(len(loss_ls)), loss_ls)
plt.show()
The loss curve produced by the final model is shown below.
As the plot shows, the loss fluctuates considerably at first, then drops quickly; toward the end there are still small oscillations, but it settles around a fairly small value. This is a simple neural network implemented by hand; doing everything with the encapsulated built-in methods should work even better. The main point here is to convey the idea.
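For comparison, here is a minimal sketch of the encapsulated approach mentioned above, with the same hyperparameters as the manual loop (assuming the model has not already been trained):

model.compile(loss='mse', optimizer=keras.optimizers.Adam(learning_rate=1e-3))
history = model.fit(x_train_scaled, y_train,
                    validation_data=(x_valid_scaled, y_valid),
                    epochs=10, batch_size=64)

Here model.fit handles batching, shuffling, gradient computation, and weight updates internally, and the validation set gives a per-epoch check against overfitting.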