4. Train a TensorFlow Model in Ten Minutes

A TensorFlow (1.x) program follows four steps (a minimal skeleton is sketched right after the list):

  1. Data (DataSet)
  2. Define the network structure and the forward pass (Network structure and forward)
  3. Define the loss function and the optimization algorithm
  4. Create a Session and run the optimization algorithm
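The snippet below is only an illustration of these four steps, with placeholder data and a hypothetical one-layer network; it is not the example built in the rest of this section:

import tensorflow as tf
import numpy as np

# 1. data: tiny placeholder arrays
x_data = np.random.rand(8, 2).astype(np.float32)
y_data = (2 * x_data[:, :1] + x_data[:, 1:]).astype(np.float32)

# 2. network structure and forward pass
x = tf.placeholder(tf.float32, [None, 2])
y = tf.placeholder(tf.float32, [None, 1])
y_hat = tf.layers.dense(x, 1)

# 3. loss and optimization algorithm
loss = tf.reduce_mean(tf.square(y_hat - y))
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)

# 4. session: run the optimizer
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, feed_dict={x: x_data, y: y_data})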
4.1 Data

We use a simple synthetic dataset: x1 and x2 are drawn at random, and 2 * x1 + x2 serves as the target value. The Data class below wraps this dataset, including the batching method (get_batch_data) needed during the training phase.

import tensorflow as tf
from numpy.random import RandomState
from absl import flags, app

FLAGS = flags.FLAGS

flags.DEFINE_integer('num_batch_size', 20, 'batch size')
flags.DEFINE_integer('num_data', 100000, 'num data size')
flags.DEFINE_integer('num_train_step', 10000, 'num train step')

SEED = 1
NUM_FEATURE = 2

class Data:
    '''
    Synthetic dataset: y = 2 * x1 + x2, split evenly into train and test.
    '''
    def __init__(self, num_data):
        self.num_data = num_data
        self._generate_data()

    def _generate_data(self):
        # features are uniform in [0, 1); the label is the deterministic
        # linear function y = 2 * x1 + x2
        rs = RandomState(SEED)
        all_data = rs.rand(self.num_data, NUM_FEATURE)
        y = [[2 * x1 + x2] for x1, x2 in all_data]
        # 50/50 train/test split
        self.num_train_data = self.num_data // 2
        self.num_test_data = self.num_data - self.num_train_data

        self.train_data_x = all_data[0: self.num_train_data, :]
        self.train_data_y = y[0: self.num_train_data]

        self.test_data_x = all_data[self.num_train_data: self.num_data, :]
        self.test_data_y = y[self.num_train_data: self.num_data]

    def get_all_test_data(self):
        return self.test_data_x, self.test_data_y

    def get_batch_data(self, step, batch_size):
        # `%` wraps start_index back into [0, num_train_data), so batches
        # cycle through the training set without an explicit boundary check
        start_index = batch_size * step % self.num_train_data
        end_index = min(start_index + batch_size, self.num_train_data)
        return self.train_data_x[start_index: end_index, :], \
               self.train_data_y[start_index: end_index]
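
A quick sanity check of the batching logic (a hypothetical standalone snippet, assuming the class above is in scope; it is not part of the training script):

data = Data(100)                   # 50 train / 50 test samples
x, y = data.get_batch_data(0, 20)  # first batch
print(x.shape, len(y))             # (20, 2) 20
x, y = data.get_batch_data(3, 20)  # step 3 wraps: start_index = 60 % 50 = 10
print(x.shape)                     # (20, 2)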

4.2 Define the Network Structure and the Forward Pass

Since the dataset is simple, a fully connected network with a single hidden layer suffices. The structure is:
input: 2
hidden_size: 10
y: 1
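
Spelled out, the forward pass implemented by the inference function below is:

hidden = sigmoid(x · W1 + b1)
y_predict = hidden · W2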

def _sigmoid(x):
    # hand-rolled sigmoid; tf.nn.sigmoid(x) is equivalent
    return 1 / (1 + tf.math.exp(-x))

def inference(input_tensor, hidden_layer):
    with tf.variable_scope("weight1", reuse=tf.AUTO_REUSE):
        w1 = tf.get_variable("w1",
                             shape=[NUM_FEATURE, hidden_layer],
                             initializer=tf.initializers.orthogonal())
        w2 = tf.get_variable("w2",
                             shape=[hidden_layer, 1],
                             initializer=tf.initializers.orthogonal())
        # note: biases are more commonly zero-initialized; orthogonal works
        # here only because the shape is 2-D
        b1 = tf.get_variable("b1",
                             shape=[1, hidden_layer],
                             initializer=tf.initializers.orthogonal())

    hidden_output = _sigmoid(tf.matmul(input_tensor, w1) + b1)
    y_predict = tf.matmul(hidden_output, w2)
    return y_predict

inputs = tf.placeholder(dtype=tf.float32, shape=[None, 2], name="inputs")
y_true = tf.placeholder(dtype=tf.float32, shape=[None, 1], name="y_true")

# inference result
y_predict = inference(inputs, 10)

# loss: mean squared error
loss = tf.math.reduce_mean(tf.math.square(y_predict - y_true))

# define optimizer
op = tf.train.AdamOptimizer(learning_rate=0.001)
train_step = op.minimize(loss)
init = tf.global_variables_initializer()
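
As an aside, the same two-layer network can be written more compactly with TF1's layer API. This is an equivalent sketch using tf.layers.dense, not what the rest of this post's script uses:

def inference_dense(input_tensor, hidden_layer):
    # same structure: 2 -> hidden_layer (sigmoid) -> 1 (linear)
    hidden = tf.layers.dense(input_tensor, hidden_layer,
                             activation=tf.nn.sigmoid,
                             kernel_initializer=tf.initializers.orthogonal())
    return tf.layers.dense(hidden, 1,
                           kernel_initializer=tf.initializers.orthogonal())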

4.3 Create a Session and Run the Optimization

def main(argv):
    with tf.Session() as sess:
        all_data = Data(FLAGS.num_data)
        sess.run(init)
        for i in range(FLAGS.num_train_step):
            train_x, train_y = all_data.get_batch_data(i, FLAGS.num_batch_size)
            _, train_loss = sess.run([train_step, loss],
                                     feed_dict={inputs: train_x, y_true: train_y})
            if i % 1000 == 0:
                # evaluate on the held-out test set only when logging
                test_x, test_y = all_data.get_all_test_data()
                test_loss = sess.run(loss, feed_dict={inputs: test_x, y_true: test_y})
                print('Step: {}, training loss: {}, test_loss: {}'.format(i, train_loss, test_loss))

if __name__ == "__main__":
    app.run(main)
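
With absl flags, the three settings can be overridden from the command line. Assuming the script is saved as train.py (a hypothetical filename):

python train.py --num_batch_size=32 --num_train_step=20000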

4.4 Results

Because the target is the deterministic, noise-free function 2 * x1 + x2, both losses converge toward zero:

Step: 0, training loss: 2.830268383026123, test_loss: 3.4940388
Step: 1000, training loss: 0.26916900277137756, test_loss: 0.28658926
Step: 2000, training loss: 0.15868720412254333, test_loss: 0.13481888
Step: 3000, training loss: 0.01826903596520424, test_loss: 0.013482004
Step: 4000, training loss: 0.0002820939407683909, test_loss: 0.0009895641
Step: 5000, training loss: 0.0005696517764590681, test_loss: 0.0006386156
Step: 6000, training loss: 0.00033680087653920054, test_loss: 0.00041960296
Step: 7000, training loss: 0.0002564012538641691, test_loss: 0.0002899276
Step: 8000, training loss: 0.000306981906760484, test_loss: 0.00021196381
Step: 9000, training loss: 7.767449278617278e-05, test_loss: 0.00018550489