Custom training in Keras with train_on_batch

train_on_batch can be used to build a fine-grained, custom training loop in Keras: each call runs a single gradient update on one batch and returns the loss, so you control the loop yourself instead of delegating to fit.

Usage example:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense


x = np.linspace(-2, 6, 200)
np.random.shuffle(x)
y = 0.5 * x + 2 + 0.15 * np.random.randn(200,)

# # plot the data
# plt.scatter(x, y)
# plt.show()

model = Sequential()
model.add(Dense(units=1, input_dim=1))

model.compile(loss='mse', optimizer='sgd')

# train on the first 160 samples
x_train, y_train = x[0:160], y[0:160]

# start training
# model.fit(x_train, y_train, epochs=100, batch_size=64)

for step in range(500):
    cost = model.train_on_batch(x_train, y_train)
    if step % 20 == 0:
        print('cost is %f' % cost)

# evaluate on the remaining 40 samples
x_test, y_test = x[160:], y[160:]

# start evaluation
cost_eval = model.evaluate(x_test, y_test, batch_size=40)
print('evaluation loss %f' % cost_eval)

model.summary()

# get_weights() returns NumPy arrays: w has shape (1, 1), b has shape (1,),
# so index into them before formatting with %f
w, b = model.layers[0].get_weights()
print('weight %f , bias %f' % (w[0][0], b[0]))

# start prediction
y_prediction = model.predict(x_test)
plt.scatter(x_test, y_test)
plt.plot(x_test, y_prediction)
plt.show()
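Conceptually, each call to train_on_batch performs one forward pass, one gradient computation, and one optimizer update on the batch it is given. A minimal NumPy sketch of that single SGD step for the same linear model (the helper name sgd_step is hypothetical, not a Keras internal, and the learning rate is an assumed value):

```python
import numpy as np

def sgd_step(w, b, x, y, lr=0.01):
    """One train_on_batch-style update for y_hat = w*x + b with MSE loss."""
    y_hat = w * x + b
    err = y_hat - y
    cost = np.mean(err ** 2)          # MSE on this batch
    grad_w = 2.0 * np.mean(err * x)   # d(cost)/dw
    grad_b = 2.0 * np.mean(err)       # d(cost)/db
    return w - lr * grad_w, b - lr * grad_b, cost

# same synthetic data as above: y = 0.5 * x + 2 plus noise
rng = np.random.default_rng(0)
x = np.linspace(-2, 6, 160)
y = 0.5 * x + 2 + 0.15 * rng.standard_normal(160)

w, b = 0.0, 0.0
for step in range(500):
    w, b, cost = sgd_step(w, b, x, y)

print('weight %f , bias %f' % (w, b))  # approaches 0.5 and 2
```

This is the loop that train_on_batch hides: the Keras version additionally tracks optimizer state and compiled metrics, but the per-batch update is the same idea.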

A good related blog post:

https://blog.csdn.net/xovee/article/details/91357143?utm_medium=distribute.pc_relevant.none-task-blog-baidujs-2
