Contents (note: this notebook was written in JupyterLab)
Import the data
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
Inspect the shape of the data
housing=fetch_california_housing()
housing.data.shape,housing.target.shape
((20640, 8), (20640,))
Standardize the data, then split it into training, validation, and test sets
scaler=StandardScaler()
# note: fitting the scaler on the full dataset leaks test-set statistics;
# for a rigorous evaluation, fit on the training split only and transform the rest
x_data=scaler.fit_transform(housing.data)
x_train_full,x_test,y_train_full,y_test=train_test_split(x_data,housing.target)
x_train,x_valid,y_train,y_valid=train_test_split(x_train_full,y_train_full)
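With train_test_split's default test_size=0.25, the sizes of the three sets can be worked out by hand from the 20640 rows (a quick arithmetic check, not part of the pipeline):

```python
# two successive 75/25 splits of the 20640 California-housing rows
total = 20640
test = round(total * 0.25)        # 5160 test samples
train_full = total - test         # 15480
valid = round(train_full * 0.25)  # 3870 validation samples
train = train_full - valid        # 11610 training samples
# with the default batch size of 32, training takes ceil(11610/32) steps
steps = -(-train // 32)           # 363, matching "363/363" in the log below
print(train, valid, test, steps)
```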
Import TensorFlow 2
import tensorflow as tf
Build the model with the TensorFlow 2 functional API
Note: tf.keras.layers.Input is a function.
Building a model with the functional API works like calling functions: apart from the input layer, every subsequent layer is called with the previous layer's output as its argument.
input_=tf.keras.layers.Input(shape=x_train.shape[1:])
hidden1=tf.keras.layers.Dense(30,activation='elu',kernel_initializer='he_normal')(input_)
hidden2=tf.keras.layers.Dense(30,activation='elu',kernel_initializer='he_normal')(hidden1)
# note: this concatenates the raw input_ with hidden2's output
concat=tf.keras.layers.Concatenate()([input_,hidden2])
output=tf.keras.layers.Dense(1)(concat)
model=tf.keras.Model(inputs=[input_],outputs=[output])
model.compile(loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001,momentum=0.9))
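As a sanity check on the wiring above, the model's parameter counts can be worked out by hand (each Dense layer has inputs × units weights plus one bias per unit):

```python
# Dense parameters = inputs * units + units (bias term)
hidden1_params = 8 * 30 + 30        # 270: 8 raw features -> 30 units
hidden2_params = 30 * 30 + 30       # 930
# Concatenate stacks the 8 raw features with hidden2's 30 activations
output_params = (8 + 30) * 1 + 1    # 39
total_params = hidden1_params + hidden2_params + output_params
print(total_params)  # 1239
```

model.summary() would report the same totals.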
Write a function that names each TensorBoard run, and build the callback
Note: each run produces a timestamped log directory. When training finishes, run tensorboard --logdir=logs/logs_1 --port=6006 from the current directory, then open localhost:6006 in a browser to view the results.
import os
logs_root=os.path.join(os.curdir,'logs')
logs1_root=os.path.join(logs_root,'logs_1')
def get_run_logdir():
    import time
    run_id=time.strftime('run_%Y_%m_%d-%H_%M_%S')
    return os.path.join(logs1_root,run_id)
run_logdir=get_run_logdir()
tensorboard_cb=tf.keras.callbacks.TensorBoard(run_logdir)
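For reference, the strftime pattern above produces names like the following (here pinned to the Unix epoch in UTC so the output is reproducible; a real run uses the current local time):

```python
import time

# same format string as get_run_logdir, with a fixed timestamp
run_id = time.strftime('run_%Y_%m_%d-%H_%M_%S', time.gmtime(0))
print(run_id)  # run_1970_01_01-00_00_00
```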
Start training
history=model.fit(x_train,y_train,epochs=20,validation_data=(x_valid,y_valid),callbacks=[tensorboard_cb])
Epoch 1/20
363/363 [==============================] - 2s 4ms/step - loss: 0.7307 - val_loss: 0.5071
Epoch 2/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4633 - val_loss: 0.5007
Epoch 3/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4517 - val_loss: 0.4579
Epoch 4/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4497 - val_loss: 0.4336
Epoch 5/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4274 - val_loss: 0.4481
Epoch 6/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4309 - val_loss: 0.4558
Epoch 7/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4220 - val_loss: 0.4237
Epoch 8/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4082 - val_loss: 0.4114
Epoch 9/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4045 - val_loss: 0.4049
Epoch 10/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4059 - val_loss: 0.3948
Epoch 11/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3985 - val_loss: 0.3989
Epoch 12/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3929 - val_loss: 0.3944
Epoch 13/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3904 - val_loss: 0.3840
Epoch 14/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3872 - val_loss: 0.3864
Epoch 15/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3844 - val_loss: 0.3766
Epoch 16/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3817 - val_loss: 0.4225
Epoch 17/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3852 - val_loss: 0.3803
Epoch 18/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3798 - val_loss: 0.3818
Epoch 19/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3771 - val_loss: 0.3787
Epoch 20/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3754 - val_loss: 0.3653
import pandas as pd
Plot the loss curves
pd.DataFrame(history.history).plot()
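This works because history.history is just a dict mapping each metric name to a per-epoch list, so the DataFrame gets one column per metric. A minimal sketch, using the first three epochs from the log above in place of a real History object:

```python
import pandas as pd

# history.history is a plain dict: metric name -> list of per-epoch values
fake_history = {
    'loss': [0.7307, 0.4633, 0.4517],      # first three epochs from the log
    'val_loss': [0.5071, 0.5007, 0.4579],
}
df = pd.DataFrame(fake_history)
print(df.columns.tolist())  # ['loss', 'val_loss']
# df.plot() draws one line per column, i.e. training vs. validation loss
```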
Another experiment with the functional API
Besides the
tf.keras.layers.Concatenate()
class, which accepts multiple inputs, we can also write a simple layer of our own. Such a layer can process several inputs separately and can also produce several distinct outputs (rather than every output head consuming all of the previous layer's outputs).
tf.keras.layers.Lambda()
turns a simple lambda function into a layer.
Note: a layer is called with a single argument, so multiple inputs must be packed into one tuple.
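The same single-argument convention can be seen outside Keras with plain NumPy arrays: the lambda below takes one tuple and returns a tuple of three results, mirroring the Lambda layer in the next cell:

```python
import numpy as np

# one tuple in, a tuple of three outputs out
f = lambda x: (x[0] + x[1], x[0] * x[1], x[0] % x[1])

a = np.array([6.0, 8.0])
b = np.array([4.0, 3.0])
s, p, m = f((a, b))
print(s, p, m)  # [10. 11.] [24. 24.] [2. 2.]
```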
# multiple inputs, split into multiple outputs
input1=tf.keras.layers.Input(shape=[5],batch_size=32,name='in1')
input2=tf.keras.layers.Input(shape=[5],batch_size=32,name='in2')
two_in_three_out=tf.keras.layers.Lambda(lambda x:(x[0]+x[1],x[0]*x[1],x[0]%x[1]))((input1,input2))
output1=tf.keras.layers.Dense(30)(two_in_three_out[0])
output2=tf.keras.layers.Dense(30)(two_in_three_out[1])
output3=tf.keras.layers.Dense(30)(two_in_three_out[2])
model=tf.keras.Model(inputs=[input1,input2],outputs=[output1,output2,output3])
model.summary()
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
in1 (InputLayer) [(32, 5)] 0 []
in2 (InputLayer) [(32, 5)] 0 []
lambda (Lambda) ((32, 5), 0 ['in1[0][0]',
(32, 5), 'in2[0][0]']
(32, 5))
dense_3 (Dense) (32, 30) 180 ['lambda[0][0]']
dense_4 (Dense) (32, 30) 180 ['lambda[0][1]']
dense_5 (Dense) (32, 30) 180 ['lambda[0][2]']
==================================================================================================
Total params: 540
Trainable params: 540
Non-trainable params: 0
__________________________________________________________________________________________________
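The 180 parameters reported for each Dense layer check out by hand:

```python
# each Dense(30) sees a 5-feature Lambda output: 5*30 weights + 30 biases
per_layer = 5 * 30 + 30   # 180
total = 3 * per_layer     # 540, matching "Total params" in the summary
print(per_layer, total)
```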