TensorFlow: Custom training: basics

In this section, we use some of the TF primitives introduced earlier to do some simple machine learning.

The official docs recommend a high-level API such as tf.keras, but a strong foundation is important, so this section covers neural network training from first principles.

In this tutorial we cover Variables, and we build and train a simple linear model using the TensorFlow primitives discussed so far.

See the code below for details.

code1

import tensorflow as tf

tf.enable_eager_execution()

x = tf.zeros([10,10])
x += 2
print(x.numpy())

v = tf.Variable(1.0)
assert v.numpy() == 1.0

#Re-assign the value
v.assign(3.0)
assert v.numpy() == 3.0

# use 'v' in a TF operation like tf.square() and reassign
v.assign(tf.square(v))
print(v.numpy())

# note: this rebinds the name 'v' to a plain Tensor (2 * the old value); it does not modify the Variable in place
v = v * 2

print(v.numpy())
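
code2 below relies on tf.GradientTape to compute gradients, so here is a minimal bridging sketch of how a tape records operations on a Variable and produces a gradient (the variable name w and the values are just illustrative):

import tensorflow as tf

tf.enable_eager_execution()

w = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = w * w              # y = w^2 is recorded by the tape
dy_dw = tape.gradient(y, w)
print(dy_dw.numpy())       # prints 4.0, since dy/dw = 2*w and w = 2.0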

code2

'''
Example: Fitting a linear model

Let's now use the concepts we have so far
(Tensor, GradientTape, Variable)
to build and train a simple model.

This typically involves a few steps:
1. Define the model.
2. Define a loss function.
3. Obtain training data.
4. Run through the training data and use an 'optimizer' to
   adjust the variables to fit the data.
'''
import tensorflow as tf
import matplotlib.pyplot as plt 
tf.enable_eager_execution() # don't forget to enable eager execution (TF 1.x)


# create model
class Model(object):
    def __init__(self):
        # Initialize the variables to (5.0, 0.0).
        # In practice, these should be initialized to random values.
        self.W = tf.Variable(5.0)
        self.b = tf.Variable(0.0)

    def __call__(self, x):
        return self.W * x + self.b


model = Model()
# print(model(3.0).numpy())


# define the loss function: mean squared error (MSE) between predictions and targets
def loss(predicted_y, desired_y):
    return tf.reduce_mean(tf.square(predicted_y - desired_y))


# make training data
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000

inputs = tf.random_normal(shape=[NUM_EXAMPLES])
noise = tf.random_normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise 

# visualize 
plt.scatter(inputs,outputs,c='b',label='training data')
plt.scatter(inputs,model(inputs),c='r',label='model')
plt.legend()

print('current loss: ')
print(loss(model(inputs), outputs).numpy())


# define a training loop
'''
There are many variants of the gradient descent scheme
that are captured in tf.train.Optimizer implementations.

In the spirit of building from first principles,
in this particular example we will implement the
basic math ourselves (an optimizer-based variant is
sketched after code2).
'''
def train(model, inputs, outputs, learning_rate):
    # record the forward pass on a tape so gradients can be computed
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    # gradients of the loss with respect to W and b
    dW, db = t.gradient(current_loss, [model.W, model.b])
    # gradient-descent update: W <- W - lr * dW, b <- b - lr * db
    model.W.assign_sub(learning_rate * dW)
    model.b.assign_sub(learning_rate * db)



'''
Finally, let's repeatedly run through the training data and see how W and b evolve.
'''
model = Model()

# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
    Ws.append(model.W.numpy())
    bs.append(model.b.numpy())
    current_loss = loss(model(inputs),outputs)
    print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' % 
        (epoch,Ws[-1],bs[-1],current_loss))
    train(model, inputs, outputs, learning_rate = 0.1)


#plot it
plt.figure()
plt.plot(epochs, Ws, 'r',
         epochs, bs, 'b')
plt.plot([TRUE_W]*len(epochs), 'r--',
         [TRUE_b]*len(epochs), 'b--')
plt.legend(['W','b','true W','true b'])
plt.show()
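
The comment in code2 notes that the gradient descent update is also captured by tf.train.Optimizer implementations. Here is a minimal sketch of the training step rewritten with tf.train.GradientDescentOptimizer; it assumes the Model class, loss, inputs and outputs defined in code2, and the helper name train_with_optimizer is just illustrative:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

def train_with_optimizer(model, inputs, outputs):
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    grads = t.gradient(current_loss, [model.W, model.b])
    # apply_gradients performs the same W <- W - lr * dW update as assign_sub above
    optimizer.apply_gradients(zip(grads, [model.W, model.b]))

model = Model()
for epoch in range(10):
    train_with_optimizer(model, inputs, outputs)
print(model.W.numpy(), model.b.numpy())  # should approach TRUE_W = 3.0 and TRUE_b = 2.0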

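Since the introduction mentions that the official docs recommend tf.keras, here is a rough sketch of the same linear fit with the high-level API, assuming TF 1.x with eager execution; keras_model, the data shapes and the hyperparameters are illustrative choices:

import numpy as np
import tensorflow as tf

tf.enable_eager_execution()

# synthetic data of the same form as above: y = 3x + 2 + noise
x = np.random.randn(1000, 1).astype(np.float32)
y = 3.0 * x + 2.0 + np.random.randn(1000, 1).astype(np.float32)

# a single Dense unit computes exactly W*x + b
keras_model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
keras_model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.1), loss='mse')
keras_model.fit(x, y, epochs=10, verbose=0)

W, b = keras_model.layers[0].get_weights()
print(W, b)  # should approach 3.0 and 2.0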