My TensorFlow Study Notes (3)

I. Tips

1. Check the TensorFlow installation path and version

import tensorflow as tf
tf.__path__     # check the installation path
tf.__version__  # check the version

II. Linear Regression

1. The source code comes from stanford-tensorflow-tutorials, 03_linreg.

2. While writing my own version I ran into a question: what is the difference between tf.add and the + operator? What I found after searching (a quick check is sketched below):

    tf.add(a, b) and a + b are identical in computational precision; there is no difference between the two. The overloaded-operator form a + b is converted internally to a.__add__(b), which in turn maps back to tf.add. The relevant mapping in math_ops.py is:

_OverrideBinaryOperatorHelper(gen_math_ops.add, "add")
    Reference: In tensorflow what is the difference between tf.add and operator (+)?
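
A quick way to verify the equivalence (a minimal sketch; the tensor names are mine, not from the tutorial):

import tensorflow as tf

a = tf.constant(3.0)
b = tf.constant(4.0)
c1 = tf.add(a, b)  # explicit op
c2 = a + b         # overloaded operator; internally mapped back to tf.add
with tf.Session() as sess:
    print(sess.run([c1, c2]))  # both evaluate to 7.0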

3. Key code

# Step 6: using gradient descent with learning rate of 0.001 to minimize loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)

with tf.Session() as sess:
	sess.run(tf.global_variables_initializer())
	# Step 8: train the model for 100 epochs
	for i in range(100):
		total_loss = 0
		for x, y in data:
			# run the optimizer and fetch the value of loss
			_, l = sess.run([optimizer, loss], feed_dict={X: x, Y: y})
			total_loss += l
		print('Epoch {0}: {1}'.format(i, total_loss/n_samples))
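
For context, the omitted steps in the tutorial define the placeholders, model, and loss roughly as follows (reconstructed from the tutorial's structure; treat the details as a sketch rather than the exact source):

# Step 3: create placeholders for X (birth rate) and Y (life expectancy)
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
# Step 4: create weight and bias, initialized to 0
w = tf.get_variable('weights', initializer=tf.constant(0.0))
b = tf.get_variable('bias', initializer=tf.constant(0.0))
# Step 5: build the model to predict Y
Y_predicted = w * X + b
# Step 6 (loss): use squared error as the loss function
loss = tf.square(Y - Y_predicted, name='loss')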

4. Huber loss

    Use the Huber loss to reduce the weight given to outliers:
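
For reference, the textbook definition of the Huber loss with threshold $\delta$ (added here for completeness, not from the original post):

$$
L_\delta(y, \hat{y}) =
\begin{cases}
\frac{1}{2}(y - \hat{y})^2 & \text{if } |y - \hat{y}| \le \delta \\
\delta\,|y - \hat{y}| - \frac{1}{2}\delta^2 & \text{otherwise}
\end{cases}
$$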


    In TensorFlow, control flow inside the graph cannot be written with a plain Python if-else statement (that only works under eager execution); use tf.cond instead:

tf.cond(pred, fn1, fn2, name=None)
    Returns: either fn1() or fn2() based on the boolean predicate `pred`. (Note: this means `fn1` and `fn2` are functions, not tensors.)
    Arguments: `fn1` and `fn2` both return lists of output tensors; they must return the same non-zero number and types of outputs.
def huber_loss(labels, predictions, delta=14.0):
    residual = tf.abs(labels - predictions)
    def f1(): return 0.5 * tf.square(residual)                   # quadratic branch for small residuals
    def f2(): return delta * residual - 0.5 * tf.square(delta)   # linear branch for outliers
    # tf.cond needs a scalar boolean predicate, so labels/predictions are scalars here
    return tf.cond(residual < delta, f1, f2)
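
In the tutorial's linear regression this function simply replaces the squared-error loss, e.g. loss = huber_loss(Y, Y_predicted).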

5. tf.data

    Use tf.data instead of placeholders:

# Step 1: read in the data
data, n_samples = utils.read_birth_life_data(DATA_FILE)

# Step 2: create the Dataset and iterator
dataset = tf.data.Dataset.from_tensor_slices((data[:, 0], data[:, 1]))
iterator = dataset.make_initializable_iterator()
X, Y = iterator.get_next()

# (steps 3-7: model, loss, and optimizer as before)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Step 8: train the model for 100 epochs
    for i in range(100):
        sess.run(iterator.initializer)  # re-initialize the iterator for each epoch
        total_loss = 0
        try:
            while True:
                _, l = sess.run([optimizer, loss])
                total_loss += l
        except tf.errors.OutOfRangeError:
            pass  # the dataset has been exhausted; the epoch is over

        print('Epoch {0}: {1}'.format(i, total_loss/n_samples))

    There are two kinds of iterators:

iterator = dataset.make_one_shot_iterator()
# Iterates through the dataset exactly once. No initialization needed.
iterator = dataset.make_initializable_iterator()
# Iterates through the dataset as many times as we want. Needs to be initialized for each epoch.

    make_one_shot_iterator() needs no initialization but can only sweep through the dataset once, from start to finish; make_initializable_iterator() can be read any number of times, but must be re-initialized before each pass.
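
A minimal sketch of the one-shot variant, assuming the same dataset as above (my own example, not from the tutorial):

dataset = tf.data.Dataset.from_tensor_slices((data[:, 0], data[:, 1]))
iterator = dataset.make_one_shot_iterator()  # no initializer to run
X, Y = iterator.get_next()
with tf.Session() as sess:
    try:
        while True:
            print(sess.run([X, Y]))  # exactly one pass, then OutOfRangeError
    except tf.errors.OutOfRangeError:
        pass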

6. Optimizer

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
sess.run([optimizer]) 

    Gradient descent is used to minimize the loss. By default, the optimizer trains all trainable variables that its objective function depends on. For any variable that should not be trained, pass the keyword trainable=False when creating it:

global_step = tf.Variable(0, trainable=False, dtype=tf.int32)
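
A common use is a global step counter that the training op increments but gradient descent never updates (a sketch; the global_step argument to minimize() is standard TF1.x):

global_step = tf.Variable(0, trainable=False, dtype=tf.int32, name='global_step')
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss, global_step=global_step)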





