tfe with a Keras Model for linear fitting, and linear fitting with manually handled gradients

Previous post: tfe simple example of automatic optimization for linear fitting

Next post: tfe model saving and loading

Simple linear fitting, handling the gradients yourself

import tensorflow as tf

tf.enable_eager_execution()

# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise


def prediction(inputs, weight, bias):
    # Linear model: y = weight * x + bias
    return inputs * weight + bias


# A loss function using mean-squared error
def loss(weights, biases):
    error = prediction(training_inputs, weights, biases) - training_outputs
    return tf.reduce_mean(tf.square(error))


# Return the derivatives of loss with respect to weight and bias
def grad(weights, biases):
    # Record the forward pass on the tape so it can be differentiated
    with tf.GradientTape() as tape:
        loss_value = loss(weights, biases)
    # Ask the tape for d(loss)/d(weights) and d(loss)/d(biases)
    return tape.gradient(loss_value, [weights, biases])


train_steps = 200
learning_rate = 0.1
# Start with arbitrary values for W and B on the same batch of data
W = tf.Variable(5.)
B = tf.Variable(10.)

print("init loss ", loss(W, B))

for i in range(train_steps):
    dW, dB = grad(W, B)
    # Plain gradient descent: parameter <- parameter - learning_rate * gradient
    W.assign_sub(dW * learning_rate)
    B.assign_sub(dB * learning_rate)
    if i % 20 == 0:
        print("Loss ", i, loss(W, B))

print("Final loss ", loss(W, B))
print("w,b ", W.numpy(), B.numpy())

Output

init loss  tf.Tensor(68.62634, shape=(), dtype=float32)
Loss  0 tf.Tensor(44.42792, shape=(), dtype=float32)
Loss  20 tf.Tensor(1.0371987, shape=(), dtype=float32)
Loss  40 tf.Tensor(1.0309802, shape=(), dtype=float32)
Loss  60 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  80 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  100 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  120 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  140 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  160 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  180 tf.Tensor(1.0309793, shape=(), dtype=float32)
Final loss  tf.Tensor(1.0309793, shape=(), dtype=float32)
w,b  2.9830043 2.0019853
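
As a sanity check on what tape.gradient returns here, the MSE gradients of this one-variable linear model have a closed form: dL/dW = 2 * mean(x * e) and dL/dB = 2 * mean(e), where e is the prediction error. A minimal sketch, reusing the training data, prediction, and grad defined above (the names error, analytic_dW, and analytic_dB are just for illustration):

# Closed-form MSE gradients for y = W*x + B, compared against the tape
error = prediction(training_inputs, W, B) - training_outputs
analytic_dW = 2 * tf.reduce_mean(training_inputs * error)
analytic_dB = 2 * tf.reduce_mean(error)
dW, dB = grad(W, B)
print(analytic_dW.numpy(), dW.numpy())  # should agree
print(analytic_dB.numpy(), dB.numpy())  # should agree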

Using an optimizer and Keras

Subclass tf.keras.Model, then define the loss function and the network structure yourself.

Finally, update the parameters with an optimizer inside the training loop.

In eager mode an optimizer can also be handed a function: it calls that function back on every step and optimizes the parameters against the loss the function returns. The listing below takes the equivalent explicit route instead, computing gradients with tf.GradientTape and applying them with apply_gradients.
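
As a minimal sketch of that callable form, assuming the Model, loss, and training data defined in the listing below: in eager mode, minimize() accepts a zero-argument callable that returns the loss, instead of a loss tensor.

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
for i in range(300):
    # minimize() calls the lambda each step, differentiates the returned
    # loss with respect to the trainable variables, and applies the update
    optimizer.minimize(lambda: loss(model, training_inputs, training_outputs),
                       global_step=tf.train.get_or_create_global_step())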

import tensorflow as tf

tf.enable_eager_execution()


class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.W = tf.Variable(5., name='weight')
        self.B = tf.Variable(10., name='bias')

    def call(self, inputs):
        return inputs * self.W + self.B


# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 10000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise


# The loss function to be optimized
def loss(model, inputs, targets):
    error = model(inputs) - targets
    return tf.reduce_mean(tf.square(error))


def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return tape.gradient(loss_value, [model.W, model.B])


# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

print('init loss ', loss(model, training_inputs, training_outputs))

# Training loop
for i in range(300):
    grads = grad(model, training_inputs, training_outputs)
    # Pair each gradient with its variable and apply one descent step;
    # global_step is incremented once per call
    optimizer.apply_gradients(zip(grads, [model.W, model.B]),
                              global_step=tf.train.get_or_create_global_step())
    if i % 20 == 0:
        print('loss ', i, loss(model, training_inputs, training_outputs))

print("Final loss ", loss(model, training_inputs, training_outputs))
print('w ,b ', model.W.numpy(), model.B.numpy())

Output

init loss  tf.Tensor(68.86918, shape=(), dtype=float32)
loss  0 tf.Tensor(66.18292, shape=(), dtype=float32)
loss  20 tf.Tensor(30.068779, shape=(), dtype=float32)
loss  40 tf.Tensor(13.970341, shape=(), dtype=float32)
loss  60 tf.Tensor(6.79418, shape=(), dtype=float32)
loss  80 tf.Tensor(3.5952754, shape=(), dtype=float32)
loss  100 tf.Tensor(2.169298, shape=(), dtype=float32)
loss  120 tf.Tensor(1.5336366, shape=(), dtype=float32)
loss  140 tf.Tensor(1.2502754, shape=(), dtype=float32)
loss  160 tf.Tensor(1.1239598, shape=(), dtype=float32)
loss  180 tf.Tensor(1.0676513, shape=(), dtype=float32)
loss  200 tf.Tensor(1.0425501, shape=(), dtype=float32)
loss  220 tf.Tensor(1.0313606, shape=(), dtype=float32)
loss  240 tf.Tensor(1.0263724, shape=(), dtype=float32)
loss  260 tf.Tensor(1.024149, shape=(), dtype=float32)
loss  280 tf.Tensor(1.0231577, shape=(), dtype=float32)
Final loss  tf.Tensor(1.0227305, shape=(), dtype=float32)
w ,b  3.0170832 2.0243566
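
For what it's worth, on TensorFlow 2.x, where eager execution is the default, this training loop maps almost one-to-one onto the current API. A minimal sketch under that assumption (tf.enable_eager_execution is gone, tf.random_normal becomes tf.random.normal, and tf.keras.optimizers.SGD replaces the tf.train optimizer):

import tensorflow as tf  # 2.x

model = Model()  # the same subclass as above
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for i in range(300):
    # Record the forward pass, differentiate, and apply one SGD step
    with tf.GradientTape() as tape:
        loss_value = loss(model, training_inputs, training_outputs)
    grads = tape.gradient(loss_value, [model.W, model.B])
    optimizer.apply_gradients(zip(grads, [model.W, model.B]))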
