With Keras, things become much simpler.
Building and Training Models with Keras
Model Construction
First, note that Keras has two key modules: model and layer.
Keras layers encapsulate common building blocks such as fully connected (Dense) layers, CNNs, and RNNs. We only need to call these layers and compose them into a model to build a network; with Keras, network definition becomes efficient, standardized, and fast.
Building a model with Sequential()
import tensorflow as tf
import numpy as np

model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(15, 20), name='input'))
model.add(tf.keras.layers.Dense(10, activation='relu', name='dense1'))
model.add(tf.keras.layers.SimpleRNN(128))
model.add(tf.keras.layers.Dense(3, activation='softmax', name='pred'))
model.summary()
Output: (the model.summary() table, omitted here)
Defining the layers first, then assembling them into a model (the functional API)
import tensorflow as tf
import numpy as np

inputs = tf.keras.Input(shape=(15, 20), name='input')
out = tf.keras.layers.Dense(10, activation='relu', name='dense1')(inputs)
out = tf.keras.layers.SimpleRNN(128)(out)
out = tf.keras.layers.Dense(3, activation='softmax', name='pred')(out)
model = tf.keras.Model(inputs=inputs, outputs=out)
model.summary()
Output: (the model.summary() table, omitted here)
Model Training and Evaluation
The compile, fit, and evaluate methods of tf.keras.Model
Once the model is built, training and evaluation are done through the compile, fit, and evaluate methods of tf.keras.Model.
These three functions cover the whole training and testing workflow.
compile
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=tf.keras.losses.sparse_categorical_crossentropy,
    metrics=[tf.keras.metrics.sparse_categorical_accuracy]
)
tf.keras.Model.compile accepts three main arguments:
1. optimizer: the optimizer
2. loss: the loss function
3. metrics: the evaluation metrics
The corresponding functions from Keras can be used directly.
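As a side note, Keras also accepts string identifiers for these three arguments; the object form used above gives finer control (for example, a custom learning rate). A minimal sketch (the tiny model here is purely illustrative):

```python
import tensorflow as tf

# A tiny throwaway model, just to demonstrate the compile arguments.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])

# String shortcuts resolve to the corresponding optimizer/loss/metric objects.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])

print(type(model.optimizer).__name__)  # Adam
```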
fit
model.fit(x_train, y_train, epochs=num_epoch, batch_size=batch_size)
Alternatively, the input can be a dataset preprocessed with tf.data.Dataset:
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.shuffle(buffer_size=36000).batch(1000)
model.fit(dataset, epochs=num_epoch)
1. x_train: the training data
2. y_train: the training labels
3. epochs: the number of training epochs
4. batch_size: the batch size
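Both calling styles can be exercised end to end on synthetic data; the shapes below mirror the (15, 20) sequence input used in this article, and the random data is purely illustrative:

```python
import numpy as np
import tensorflow as tf

# Small model matching the (15, 20) input shape used in this article.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(15, 20)),
    tf.keras.layers.SimpleRNN(8),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=[tf.keras.metrics.sparse_categorical_accuracy])

# Illustrative random data: 64 samples, 3 classes.
x_train = np.random.rand(64, 15, 20).astype('float32')
y_train = np.random.randint(0, 3, size=(64,))

# Style 1: NumPy arrays, batched by the batch_size argument.
model.fit(x_train, y_train, epochs=1, batch_size=16, verbose=0)

# Style 2: a tf.data.Dataset, which handles shuffling and batching itself.
ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(64).batch(16)
model.fit(ds, epochs=1, verbose=0)
```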
evaluate
print(model.evaluate(x_test,y_test))
Note:
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
This loss expects y_true to be one-dimensional integer class indices, while the y_pred produced by the forward pass is a multi-dimensional array of class probabilities.
From the official documentation:
y_true = [1, 2]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
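Running the documentation example shows that one loss value is returned per sample, equal to -log of the predicted probability at the true class index:

```python
import math
import tensorflow as tf

y_true = [1, 2]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]

# One loss value per sample: -log(p[true_class]).
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
print(loss.numpy())  # approximately [0.0513, 2.3026]

# The same numbers by hand: -log(0.95) and -log(0.1).
print(-math.log(0.95), -math.log(0.1))
```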
Full code and results
import tensorflow as tf
import numpy as np

# Load the training set
x_train = np.loadtxt(open("radar2/train_x.csv", 'rb'), delimiter=",", skiprows=0)
y_train = np.loadtxt(open("radar2/train_y.csv", 'rb'), delimiter=",", skiprows=0)
# Load the test set
x_test = np.loadtxt(open("radar2/test_x.csv", 'rb'), delimiter=",", skiprows=0)
y_test = np.loadtxt(open("radar2/test_y.csv", 'rb'), delimiter=",", skiprows=0)
print(y_test)

# Reshape each sample into a (15, 20) sequence
x_train = np.reshape(x_train, [36000, 15, 20])
x_test = np.reshape(x_test, [18000, 15, 20])

# Build the input pipeline
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.shuffle(buffer_size=36000).batch(1000)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)

num_epoch = 200

# Build the model with the functional API
inputs = tf.keras.Input(shape=(15, 20), name='input')
out = tf.keras.layers.Dense(10, activation='relu', name='dense1')(inputs)
out = tf.keras.layers.SimpleRNN(128)(out)
out = tf.keras.layers.Dense(3, activation='softmax', name='pred')(out)
model = tf.keras.Model(inputs=inputs, outputs=out)
model.summary()

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=tf.keras.losses.sparse_categorical_crossentropy,
    metrics=[tf.keras.metrics.sparse_categorical_accuracy]
)
model.fit(dataset, epochs=num_epoch)
# model.fit(x_train, y_train, epochs=num_epoch, batch_size=batch_size)
print(model.evaluate(x_test, y_test))
The tf.GradientTape approach
tf.GradientTape is an alternative way to train: it provides a unified API across TensorFlow's eager execution and graph execution modes.
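Before the full training script, a minimal example of what the tape does: operations on watched variables are recorded inside the with block, and tape.gradient then returns the derivative. Here d(x^2)/dx at x = 3.0 is 6.0:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x              # operations on x are recorded by the tape
grad = tape.gradient(y, x)  # dy/dx = 2x = 6.0
print(grad.numpy())        # 6.0
```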
Full code and results
import tensorflow as tf
import data  # local module that loads the radar dataset
import numpy as np

num_epoch = 5

# Load the training and test datasets from the local data module
dataset, data_set = data.data()
dataset = dataset.shuffle(buffer_size=28800).batch(720)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
data_set = data_set.batch(1260)

# Build the model
inputs = tf.keras.Input(shape=(30, 10), name='input')
# out = tf.keras.layers.GRU(60, return_sequences=True)(inputs)
out = tf.keras.layers.SimpleRNN(25)(inputs)
out = tf.keras.layers.Dense(3, activation='softmax', name='pred')(out)
model = tf.keras.Model(inputs=inputs, outputs=out)
model.summary()

optimizer = tf.keras.optimizers.Adam(1e-2)
for e in range(num_epoch):
    for xtrain, ytrain in dataset:
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(
                tf.keras.losses.categorical_crossentropy(ytrain, model(xtrain)))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.trainable_variables))
    print("epoch:%d loss:%f" % (e, loss.numpy()))

# Evaluate: CategoricalAccuracy.update_state expects (y_true, y_pred) in that order
m = tf.keras.metrics.CategoricalAccuracy()
for xtest, ytest in data_set:
    m.update_state(ytest, model(xtest))
print("acc: %f" % m.result())
The output:
epoch:0 loss:0.770372
epoch:1 loss:0.635666
epoch:2 loss:0.457679
epoch:3 loss:0.377299
epoch:4 loss:0.363071
acc: 0.847421