TensorFlow 2 notes: the functional API and trying out TensorBoard

Import the data-related modules first

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

Inspect the data shapes

housing=fetch_california_housing()
housing.data.shape,housing.target.shape
((20640, 8), (20640,))

Standardize the data, then split it into training, test, and validation sets

scaler=StandardScaler()
x_data=scaler.fit_transform(housing.data)
x_train_full,x_test,y_train_full,y_test=train_test_split(x_data,housing.target)
x_train,x_valid,y_train,y_valid=train_test_split(x_train_full,y_train_full)
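Note that the code above fits the scaler on the full dataset before splitting, which leaks test-set statistics into the preprocessing step. A minimal sketch of the leakage-free variant, on synthetic data (the shapes here are hypothetical):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

x = np.random.rand(100, 8)
y = np.random.rand(100)

# Split first (default test_size=0.25 gives a 75/25 split) ...
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=42)

# ... then fit the scaler on the training split only,
# and reuse its statistics for the test split.
scaler = StandardScaler()
x_train_s = scaler.fit_transform(x_train)
x_test_s = scaler.transform(x_test)

print(x_train_s.mean(axis=0).round(6))  # ~0 per feature on the training split
```

The test features are transformed with the training mean and variance, so their scaled mean is close to, but not exactly, zero.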

Import TensorFlow 2

import tensorflow as tf

Build a TensorFlow 2 model with the functional API

Note that tf.keras.layers.Input is a function, not a layer class.
Building a model with the functional API works like calling functions: apart from creating the input, each subsequent layer is called with the preceding layer's output as its argument.

input_=tf.keras.layers.Input(shape=x_train.shape[1:])
hidden1=tf.keras.layers.Dense(30,activation='elu',kernel_initializer='he_normal')(input_)
hidden2=tf.keras.layers.Dense(30,activation='elu',kernel_initializer='he_normal')(hidden1)
# note that this concatenates input_ and hidden2
concat=tf.keras.layers.Concatenate()([input_,hidden2])
output=tf.keras.layers.Dense(1)(concat)
model=tf.keras.Model(inputs=[input_],outputs=[output])
model.compile(loss=tf.keras.losses.mean_squared_error,
              optimizer=tf.keras.optimizers.SGD(learning_rate=0.001,momentum=0.9))

Write a function that names each TensorBoard run, and build the callback

Note: each run uses the current timestamp as its log directory name. When training finishes, run tensorboard --logdir=logs/logs_1 --port=6006 from the current directory, then open localhost:6006 in a browser to view the results.

import os
logs_root=os.path.join(os.curdir,'logs')
logs1_root=os.path.join(logs_root,'logs_1')
def get_run_logdir():
    import time
    run_id=time.strftime('run_%Y_%m_%d-%H_%M_%S')
    return os.path.join(logs1_root,run_id)
run_logdir=get_run_logdir()
tensorboard_cb=tf.keras.callbacks.TensorBoard(run_logdir)

Start training

history=model.fit(x_train,y_train,epochs=20,validation_data=(x_valid,y_valid),callbacks=[tensorboard_cb])
Epoch 1/20
363/363 [==============================] - 2s 4ms/step - loss: 0.7307 - val_loss: 0.5071
Epoch 2/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4633 - val_loss: 0.5007
Epoch 3/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4517 - val_loss: 0.4579
Epoch 4/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4497 - val_loss: 0.4336
Epoch 5/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4274 - val_loss: 0.4481
Epoch 6/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4309 - val_loss: 0.4558
Epoch 7/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4220 - val_loss: 0.4237
Epoch 8/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4082 - val_loss: 0.4114
Epoch 9/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4045 - val_loss: 0.4049
Epoch 10/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4059 - val_loss: 0.3948
Epoch 11/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3985 - val_loss: 0.3989
Epoch 12/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3929 - val_loss: 0.3944
Epoch 13/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3904 - val_loss: 0.3840
Epoch 14/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3872 - val_loss: 0.3864
Epoch 15/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3844 - val_loss: 0.3766
Epoch 16/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3817 - val_loss: 0.4225
Epoch 17/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3852 - val_loss: 0.3803
Epoch 18/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3798 - val_loss: 0.3818
Epoch 19/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3771 - val_loss: 0.3787
Epoch 20/20
363/363 [==============================] - 1s 4ms/step - loss: 0.3754 - val_loss: 0.3653
Plot the loss curves

import pandas as pd
pd.DataFrame(history.history).plot()

Training loss and validation loss
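Instead of eyeballing the curve, the best epoch can be read straight from the history dict. A small sketch with hypothetical loss values (history.history from the run above has the same structure):

```python
import pandas as pd

# A Keras-style history dict: one list of per-epoch values per metric.
history = {'loss':     [0.73, 0.46, 0.45, 0.40, 0.38],
           'val_loss': [0.51, 0.50, 0.46, 0.39, 0.37]}

df = pd.DataFrame(history)
best_epoch = df['val_loss'].idxmin()  # epoch index with the lowest val loss
print(best_epoch, df['val_loss'].min())  # → 4 0.37
```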

Another experiment with the functional API

Besides tf.keras.layers.Concatenate(), which merges multiple inputs, we can also write a simple layer of our own. Such a layer can process several inputs separately and emit several distinct outputs (unlike the usual multi-output network, where every output head consumes the whole output of the previous layer).
tf.keras.layers.Lambda() wraps a simple lambda function as a layer.
Note: a layer is called with a single argument, so multiple inputs must be packed into one tuple (or list).

# a single layer that takes multiple inputs and splits into multiple outputs
input1=tf.keras.layers.Input(shape=[5],batch_size=32,name='in1')
input2=tf.keras.layers.Input(shape=[5],batch_size=32,name='in2')
mid_2in_3out=tf.keras.layers.Lambda(lambda x:(x[0]+x[1],x[0]*x[1],x[0]%x[1]))((input1,input2))
output1=tf.keras.layers.Dense(30)(mid_2in_3out[0])
output2=tf.keras.layers.Dense(30)(mid_2in_3out[1])
output3=tf.keras.layers.Dense(30)(mid_2in_3out[2])
model=tf.keras.Model(inputs=[input1,input2],outputs=[output1,output2,output3])
model.summary()
Model: "model_1"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 in1 (InputLayer)               [(32, 5)]            0           []                               
                                                                                                  
 in2 (InputLayer)               [(32, 5)]            0           []                               
                                                                                                  
 lambda (Lambda)                ((32, 5),            0           ['in1[0][0]',                    
                                 (32, 5),                         'in2[0][0]']                    
                                 (32, 5))                                                         
                                                                                                  
 dense_3 (Dense)                (32, 30)             180         ['lambda[0][0]']                 
                                                                                                  
 dense_4 (Dense)                (32, 30)             180         ['lambda[0][1]']                 
                                                                                                  
 dense_5 (Dense)                (32, 30)             180         ['lambda[0][2]']                 
                                                                                                  
==================================================================================================
Total params: 540
Trainable params: 540
Non-trainable params: 0
__________________________________________________________________________________________________
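To actually train such a model, Keras accepts one loss per output (plus optional loss_weights to balance the heads). A minimal sketch on random data — the loss choices, weights, and Dense(1) heads here are hypothetical, not from the model above:

```python
import numpy as np
import tensorflow as tf

in1 = tf.keras.layers.Input(shape=[5], name='in1')
in2 = tf.keras.layers.Input(shape=[5], name='in2')
# Same Lambda trick as above: one tuple in, three tensors out.
add_mul_mod = tf.keras.layers.Lambda(
    lambda x: (x[0] + x[1], x[0] * x[1], x[0] % x[1]))((in1, in2))
outs = [tf.keras.layers.Dense(1)(t) for t in add_mul_mod]
model = tf.keras.Model(inputs=[in1, in2], outputs=outs)

# One loss per output; loss_weights scales each head's contribution
# to the total loss that is minimized.
model.compile(loss=['mse', 'mse', 'mse'],
              loss_weights=[1.0, 0.5, 0.5],
              optimizer='sgd')

x1 = np.random.rand(64, 5).astype('float32')
x2 = np.random.rand(64, 5).astype('float32') + 1.0  # keep x[0] % x[1] well-behaved
ys = [np.random.rand(64, 1).astype('float32') for _ in range(3)]
model.fit([x1, x2], ys, epochs=1, verbose=0)
```

fit then takes a list of input arrays and a list of target arrays, one per output, in the same order as outputs.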